Hi, first of all, I'm a newbie. I would like to know if there is a way to export the trade results from optimization steps. The way I'm doing it is I use
SetOption("GenerateReport", 1 ); // force generation of full report
but then I have to open each report with Excel and export the trade results as CSV, which is time-consuming. I've been playing around with the Batch feature, but I can't find a way to save the trade results from each optimization step using it. Please help me.
No, you don't have to "open each report with Excel". In fact, you should not be opening reports with Excel, because Excel is for .xls files, not HTML files.
Reports are stored in HTML files that can easily be converted to CSV if you wish. Google for multiple solutions that work WITHOUT Excel:
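For example, here is a minimal sketch using pandas (the inline HTML snippet is just a stand-in for a saved report file, and "report.csv" is a placeholder output name):

```python
import io

import pandas as pd

# Tiny stand-in for a saved report; in practice, pass the path of the
# report HTML file to read_html() instead of this StringIO object.
html = io.StringIO(
    "<table><tr><th>Trade</th><th>Profit</th></tr>"
    "<tr><td>Long</td><td>12.5</td></tr></table>"
)

# read_html() returns one DataFrame per <table> found in the document
tables = pd.read_html(html)

# Save the first table as CSV
tables[0].to_csv("report.csv", index=False)
```

Note that this simple approach does not yet split cells that contain multiple values, which is the complication discussed below.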
Thanks for the reply. What I actually do is save the reports, open each one, go to the Trades tab in the report, click Edit, then "Copy table", paste that table into Excel, and export it as CSV.
Although I understand what you mean, and I noticed I can convert the HTML trades file created by the report into a CSV file, I would still have to modify it (entry and exit prices are in the same cells as the time cells) and then export it as CSV. What I liked about the Batch functionality is that I could export directly to CSV without doing anything; that's why I thought maybe there was a way to use the export-as-CSV feature of the Batch functionality to get the trades from each optimization step.
As I wrote, exporting from HTML (regardless of format) to CSV is very basic stuff. You can easily ask ChatGPT to write Python code for that and it will do that for you in a matter of seconds.
In general, I agree that ChatGPT will be able to do it easily. Still, in this specific case, getting a tidy .csv output with all independent fields requires additional instructions to split the fields in the trades.html report that are written in the same <TD> cell, separated by a single <BR> tag.
Splitting the fields also involves realigning the header fields and adding those that go missing in the operation.
Moreover, as in my example, the count of the table header fields is less than the count of the table body fields (the report above includes some custom metrics). So it is a bit more complex than anticipated.
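For what it's worth, a sketch of the extra handling described above (the toy table merely mimics the <BR>-joined layout of a trades report; real reports differ): it splits <BR>-joined cells into separate columns and pads the header row with placeholder names so it lines up with the widest body row.

```python
import csv
import io

from bs4 import BeautifulSoup

# Toy table mimicking a trades report where date and price share one
# <td> separated by <br>, so the header has fewer fields than the body
html = ("<table>"
        "<tr><th>Date/Price</th><th>Profit</th></tr>"
        "<tr><td>2024-01-02<br>100.5</td><td>12.3</td></tr>"
        "</table>")

table = BeautifulSoup(html, "html.parser").find("table")

rows = []
for tr in table.find_all("tr"):
    cells = []
    for cell in tr.find_all(["th", "td"]):
        # Split <br>-joined values into separate columns
        cells.extend(p.strip() for p in cell.get_text(separator="\n").split("\n"))
    rows.append(cells)

# Pad the header with placeholder names to match the widest body row
width = max(len(r) for r in rows)
rows[0] += [f"col{i}" for i in range(len(rows[0]), width)]

out = io.StringIO()
csv.writer(out).writerows(rows)
print(out.getvalue())
```

The placeholder header names (col2, col3, ...) would of course need to be renamed by hand to match the actual report columns.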
Maybe you could consider reviewing the report generation: revising the HTML code to make it more consistent for export or, perhaps even better, directly adding export to .csv (as an option in the preferences).
Just tell ChatGPT to treat the <BR> tag as a column separator for CSV output and it will do just that for you. As far as Python code generation goes, ChatGPT can do a lot. Trust me.
I asked ChatGPT:
Write a Python code that reads the HTML file, and converts first <TABLE> into CSV file treating <TD> and <BR> as column separators
and it answered with this:
To achieve this task, you can use Python with the BeautifulSoup library to parse the HTML file and the csv module to write the data to a CSV file. Below is example code that demonstrates how to convert the first <TABLE> in an HTML file into a CSV file, treating <TD> and <BR> as column separators.
First, make sure you have BeautifulSoup and lxml installed:
```bash
pip install beautifulsoup4 lxml
```
Here's the code:
```python
import csv
from bs4 import BeautifulSoup

# Read the HTML file
with open('input.html', 'r', encoding='utf-8') as file:
    html_content = file.read()

# Parse the HTML content
soup = BeautifulSoup(html_content, 'lxml')

# Find the first <table> element
table = soup.find('table')

if table:
    # Open a CSV file to write the output
    with open('output.csv', 'w', newline='', encoding='utf-8') as csvfile:
        csvwriter = csv.writer(csvfile)
        # Iterate over rows in the table
        for row in table.find_all('tr'):
            row_data = []
            # Iterate over data cells in the row
            for cell in row.find_all('td'):
                # Split the cell text at <br> tags so that each
                # <br>-separated value becomes its own column
                parts = cell.get_text(separator='\n').split('\n')
                row_data.extend(part.strip() for part in parts)
            # Write the row data to the CSV file
            csvwriter.writerow(row_data)
    print("CSV file created successfully.")
else:
    print("No <table> found in the HTML file.")
```
Explanation:

1. Reading the HTML file: open and read the contents of the HTML file into the variable html_content.
2. Parsing the HTML content: use BeautifulSoup to parse the HTML content. Here, the 'lxml' parser is used for better performance and compatibility.
3. Finding the first <table> element: use soup.find('table') to locate the first <table> element in the HTML.
4. Writing to a CSV file:
   - Open a CSV file named 'output.csv' for writing.
   - Iterate over each row (<tr>) in the table.
   - For each row, iterate over each data cell (<td>).
   - Split the cell text at <br> tags using cell.get_text(separator='\n').split('\n'), so each <br>-separated value becomes its own column.
   - Write the row data to the CSV file using csvwriter.writerow().
5. Handling cases with no <table>: if no <table> is found, print a message indicating this.

This code will create a CSV file with the content of the first <table> in the HTML file, treating both <TD> and <BR> as column separators. Adjust the file paths ('input.html' and 'output.csv') as needed for your environment.