Hello,
I've run into a problem with a database export via data manager. As the data download can get rather huge, I'm communicating with the BE service via chunked request:
This works fine, until the downloaded file gets too big and the browser tab runs out of memory. For that reason, I'd like to stream the chunks into a file on disk rather than holding them in memory. How do I do that?
This is how my current call looks:
I have to wait until the whole request is done before I can do anything with the data. Optimally, I'd like to process the individual chunks as they arrive. Can I do that?
During tests with large datasets, the BE closed the connection in my face a time or two. Googling around suggested that I should set the Connection header to keep-alive. However, Angular didn't appreciate the attempt ('Refused to set unsafe header "Connection"'). Given that most of these threads are a few years old: do I have to do anything here to keep the connection alive, or was that not connected?
Hi Ewa Baumgarten,
Query 1: How can I save each chunk as it arrives?
If you are retrieving a significant amount of data, you may run into memory-usage problems. Please note that the data manager itself does not guard against out-of-memory errors; it is only responsible for retrieving the data described by the provided query.
Upon receiving your update, we investigated the issue further and discovered a potential solution. It is possible to stream the data into a file on disk as each chunk arrives instead of holding all the data in memory. One way to do this is to use the Blob object in JavaScript to create a file object, and the FileReader object to read the data chunks as they arrive.
Here's an example implementation of streaming the data into a file on disk as it arrives:
const chunkSize = 1024 * 1024; // 1 MB chunk size
let offset = 0;

this.dataManager.executeQuery(myQuery).then((result) => {
  // Assumptions: result is sliceable (e.g. a Blob), saveAs comes from
  // FileSaver.js, and FileWriter is part of the HTML5 File API (see the
  // compatibility note below).
  const file = new Blob([], { type: 'application/octet-stream' });
  const fileWriter = new FileWriter();

  // Write one chunk to the file on disk and advance the offset.
  const processChunk = (chunk) => {
    fileWriter.write(chunk, offset, () => {
      offset += chunk.byteLength;
    });
  };

  const reader = new FileReader();

  // Fires once a slice has been read into memory.
  reader.onload = () => {
    processChunk(reader.result);
    readNextChunk();
  };

  // Read the next slice of the result; stop once nothing is left.
  const readNextChunk = () => {
    const slice = result.slice(offset, offset + chunkSize);
    if (slice.size === 0) {
      // All chunks have been processed.
      fileWriter.close();
      saveAs(file, 'filename.ext');
      return;
    }
    reader.readAsArrayBuffer(slice);
  };

  fileWriter.open(() => {
    fileWriter.truncate(0);
    readNextChunk();
  });
}, (error) => {
  console.log(error);
});
In this implementation, the processChunk function writes each incoming chunk to the file on disk via the FileWriter object, and the readNextChunk function reads the next slice of the result set. The FileReader's onload handler fires once a slice has been read into memory; it passes the chunk to processChunk and then requests the following slice through readNextChunk.
Note that this implementation uses the FileWriter and FileReader objects, which are part of the HTML5 File API, and may not be supported by all browsers. You may want to include a fallback implementation using the Blob object and the URL.createObjectURL method for browsers that do not support the File API.
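To make that fallback concrete, here is a minimal sketch. The function names and the 'application/octet-stream' MIME type are my own choices for illustration, not part of the data manager API, and note that this variant keeps all chunks in memory until the Blob is built, so it is a compatibility fallback rather than a fix for the memory problem itself.

```javascript
// Fallback sketch: collect the chunks, concatenate them into a single Blob,
// and hand the browser an object URL to download.
// `chunks` is assumed to be an array of ArrayBuffer/Uint8Array pieces.
function chunksToBlobUrl(chunks, mimeType = 'application/octet-stream') {
  const blob = new Blob(chunks, { type: mimeType });
  return URL.createObjectURL(blob);
}

// Browser-only helper: trigger the download via a temporary anchor element.
function downloadBlobUrl(url, filename) {
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url); // release the object URL after dispatching the click
}
```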
Query 2: Do I have to keep the connection alive?
The behavior of keeping the connection alive largely depends on the server's configuration and may not always be possible or effective. Please note that the data manager is not responsible for keeping the connection alive; it only retrieves data based on the provided query.
Upon receiving your update, we investigated the issue further. The error you're seeing occurs because modern browsers do not allow the Connection header to be set from script: it is a forbidden ("unsafe") header that is controlled by the browser itself. You cannot override this behavior, and any attempt to set it produces exactly that error.
However, you can try using other techniques to keep the connection alive, such as sending periodic heartbeat messages to the server or implementing a long-polling mechanism. These techniques can help ensure that the server doesn't terminate the connection prematurely.
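As one possible shape for such a heartbeat, here is a small sketch; the endpoint URL is purely hypothetical and would have to exist on your BE service:

```javascript
// Sketch: send a lightweight request at a fixed interval so the server and any
// intermediaries see traffic and are less likely to drop an idle connection.
// The heartbeat URL is a hypothetical example.
function startHeartbeat(url, intervalMs = 30000) {
  const timer = setInterval(() => {
    fetch(url, { method: 'HEAD' }).catch(() => {
      // A failed heartbeat is a hint that the connection (or server) is gone.
    });
  }, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop
}

// Usage:
// const stopHeartbeat = startHeartbeat('/api/heartbeat');
// ...once the download has finished:
// stopHeartbeat();
```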
Another approach is to detect that the server has closed the connection and then reestablish it if needed. If you are streaming over a WebSocket, for example, you can use its onerror and onclose events to detect the drop and then try to reconnect.
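One way to structure such a reconnect is sketched below. It takes a socket factory so the retry logic stays independent of the concrete transport; in the browser the factory would simply be () => new WebSocket(url), and the retry delay is an arbitrary choice for illustration.

```javascript
// Sketch: reconnect whenever the socket closes. For a browser WebSocket, an
// error is followed by a close event, so handling onclose covers both cases.
function connectWithRetry(createSocket, retryMs = 5000) {
  let socket;
  const connect = () => {
    socket = createSocket();
    socket.onclose = () => setTimeout(connect, retryMs);
  };
  connect();
  return () => socket; // accessor for the current socket instance
}

// Usage (browser):
// const current = connectWithRetry(() => new WebSocket('wss://example/stream'));
```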
If the above information is not helpful, then we would like to request some additional information to better understand the issue you are facing. Please provide us with the following details:
1) Kindly share the sample of the DataManager that reproduces the issue.
2) Kindly share the details of the query (myQuery) that you are using in the executeQuery method of the dataManager.
We appreciate your cooperation in providing us with the requested information, as it will help us provide a more effective solution to your query.
Regards,
Hemanth Kumar S
Hello Hemanth Kumar S,
This looks extremely promising; however, as we were pressed for time, we (naturally) had to implement a different approach. This is bound to come in extremely handy in the future and for the next person with this problem, though, so thank you so much for your reply.
Regards,
Ewa Baumgarten
Hi Ewa Baumgarten,
Thank you for your message. We are pleased to hear that you found our suggested approach helpful. If you feel that it adequately addresses your concern, please consider accepting it as the solution. Doing so will make it easier for other members to find the solution quickly.
Regards,
Hemanth Kumar S