I am studying deep learning with images, and I have a question I would like your opinion on.
In general, training data for image recognition found on the web is stored as JPEG to keep file sizes small, so I assume most of it consists of JPEG images.
However, in an embedded system, I think the camera image is not compressed; the raw image is kept in memory for the recognition processing.
Now, if a model trained on JPEG images from the web is used, won't the recognition results differ because the image format is different from what the embedded system feeds it? The raw image carries more information, while the JPEG image is compressed and carries less, which is what made me wonder. I tried quantifying the difference as shown below.
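A minimal sketch (assuming Pillow and NumPy are installed) that measures how much a JPEG round-trip changes pixel values relative to a losslessly stored frame. The file name "raw_frame.png" is a placeholder, not a file from any specific dataset.

```python
import io

import numpy as np
from PIL import Image

# Load a losslessly stored frame as the "raw" reference (placeholder path).
raw = np.asarray(Image.open("raw_frame.png").convert("RGB"), dtype=np.int16)

# Re-encode the same frame as JPEG at quality 75, then decode it again.
buf = io.BytesIO()
Image.fromarray(raw.astype(np.uint8)).save(buf, format="JPEG", quality=75)
jpeg = np.asarray(Image.open(buf).convert("RGB"), dtype=np.int16)

# Per-pixel absolute difference introduced purely by the JPEG round-trip.
diff = np.abs(raw - jpeg)
print("shape:", raw.shape,
      "mean abs error:", diff.mean(),
      "max abs error:", diff.max())
```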
I look forward to your answers.
algorithm machine-learning
What do you mean by "more or less information"? Whether the source is JPEG or RAW, the amount of data the network sees at training and inference time is the same: number of pixels × color depth.
Of course, JPEG is lossy compression, so its pixel values differ from the RAW image's. But this is not a matter of "some amount of information"; the information itself is different. It is the quality of the information that decreases, not the quantity. Increasing the kernel size of the pooling layers seems to absorb this to some extent (I trained on JPEG images, ran inference on bitmaps, and got satisfactory accuracy). Of course, that worked in my case, but it will not in every case; for one thing, the JPEG compression ratio is unknown. A rough sketch of the pooling idea follows.
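A minimal sketch (assuming PyTorch) of the idea above: treat the pooling kernel size as a hyperparameter you can enlarge so that small pixel-level compression artifacts are averaged away before the classifier sees them. The layer sizes, class count, and input resolution are placeholders, not a recommended model.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, pool_kernel: int = 2, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            # A larger pool_kernel applies stronger spatial averaging,
            # which dampens JPEG blocking/ringing artifacts.
            nn.AvgPool2d(kernel_size=pool_kernel),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: try pool_kernel=4 instead of 2 if the JPEG/raw mismatch hurts accuracy.
model = SmallCNN(pool_kernel=4)
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 10])
```

Whether a larger pooling kernel is enough depends on the compression ratio of the training JPEGs, which, as noted above, is usually unknown.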