I'm thinking of working with point cloud data stored on the client PC and processing it in the browser.
I know that FileReader can load File objects obtained through the File API.
My question is: is it possible to stop FileReader partway through a read and then obtain and work with the data it has read up to that point?
As a use case, we only need the header information of the point cloud data, so there is no need to load all of it, and we would like to shorten the loading time even a little. We could simply load everything before operating on it, but the bottleneck is that the point cloud data can easily exceed several tens of gigabytes and takes a long time to load.
Sample Code
var fileReader = new FileReader();
fileReader.onprogress = function (event) {
  if (event.loaded > 1000000) {
    fileReader.abort();
  }
};
fileReader.onabort = function (event) {
  // Candidate 1: I want to get the ArrayBuffer read so far at this point
  // and process the partially read point cloud data here.
};
fileReader.readAsArrayBuffer(fileObj);
Unfortunately, FileReader does not work the way you want. However, you can achieve the desired behavior by using the ReadableStream returned by the stream() method of the File object.
To briefly describe the flow of processing: first use the getReader() method to obtain a ReadableStreamDefaultReader object, then call its read() method (https://developer.mozilla.org/ja/docs/Web/API/ReadableStreamDefaultReader/read), which returns a Promise that resolves with the next piece of partial data (chunk) from the stream.
Here is some simple test code. All it does is read the file chunk by chunk and output each chunk to the console.
<!DOCTYPE html>
<html>
<head>
<title>Readable stream test</title>
<script>
function doRead() {
  let file = document.getElementById("file").files[0];
  let stream = file.stream();
  let reader = stream.getReader();
  let chunkId = 0;
  reader.read().then(function doChunk({value, done}) {
    if (done) {
      console.log("Read done.");
      return;
    }
    console.log("Chunk #" + chunkId + " (size=" + value.length + "):", value);
    ++chunkId;
    return reader.read().then(doChunk);
  });
}
</script>
</head>
<body>
<h1>Readable stream test</h1>
<p>Select file to load: <input type="file" id="file"></p>
<p>Start loading: <input type="button" value="Read" onclick="doRead()"></p>
</body>
</html>
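Applied to your use case, where only the header of the point cloud file is needed, a minimal sketch might look like the following. It keeps reading chunks until a given number of bytes has arrived and then cancels the stream, so the rest of the multi-gigabyte file is never read. The function name readHeader, the headerSize parameter, and the parseHeader call are illustrative assumptions; substitute the actual header size and parsing logic of your point cloud format.
async function readHeader(file, headerSize) {
  const reader = file.stream().getReader();
  const chunks = [];
  let received = 0;
  while (received < headerSize) {
    const {value, done} = await reader.read();
    if (done) break;            // file ended before headerSize bytes arrived
    chunks.push(value);
    received += value.length;
  }
  // Stop reading the rest of the file
  await reader.cancel();
  // Concatenate the chunks into one buffer and trim it to headerSize bytes
  const buffer = new Uint8Array(received);
  let offset = 0;
  for (const chunk of chunks) {
    buffer.set(chunk, offset);
    offset += chunk.length;
  }
  return buffer.slice(0, Math.min(headerSize, received)).buffer; // ArrayBuffer
}
// Usage (hypothetical): parse the first 1 MB as the header
// readHeader(fileObj, 1000000).then(arrayBuffer => parseHeader(arrayBuffer));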
Why don't you prepare a variable called result and save the data into it every time an onprogress event occurs?
Also, with this method a string of roughly 15 to 25 MB is loaded before onprogress fires, which should be a large enough amount to read in about 5 seconds.
<script>
// Assume <input type="file" id="file"/>
var file = document.getElementById('file');
// Verify File API support
if (window.File && window.FileReader && window.FileList && window.Blob) {
  function loadLocalImage(e) {
    var fileReader = new FileReader();
    var fileData = e.target.files[0];
    let result = "";
    fileReader.onprogress = (event) => {
      // Maintain progress here: event.target.result holds the partial text
      // read so far (cumulative), so overwrite result with the latest value
      if (event.target.result) {
        result = event.target.result;
      }
      if (event.loaded > 100000) {
        fileReader.abort();
      }
    };
    fileReader.onabort = (event) => {
      // result contains the data loaded up to the point of interruption
      console.log(result);
    };
    fileReader.onload = (event) => {
      result = event.target.result;
      console.log(result);
    };
    fileReader.readAsText(fileData);
  }
  file.addEventListener('change', loadLocalImage, false);
}
</script>
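As a further hedged variation on this answer, if the header size is known (or can be bounded) in advance, you could pass only a slice of the file to FileReader, so that no more than that many bytes are ever read. slice() is defined on the Blob interface that File inherits from; the 100000-byte figure below is only an assumed header size.
<script>
// Minimal sketch: read only the first 100000 bytes (an assumed header size).
// fileObj is a File obtained from an <input type="file">, as in the question.
var headerBlob = fileObj.slice(0, 100000);
var headerReader = new FileReader();
headerReader.onload = function (event) {
  // Only the sliced header portion has been loaded here
  console.log(event.target.result);
};
headerReader.readAsText(headerBlob);
</script>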