Differences in OCR Behavior Between an Actual iOS Device and the Simulator

Excuse me, I'm a beginner working in Objective-C.
I am creating a PDF viewer program for the iPad.

"At that time, ""Cut out the image taken with the screenshot, and use OCR to recognize the cut image."""

It works correctly in the Simulator, but the actual device does not behave the same way.
For example, if the image says "Hello," the Simulator recognizes it as "Hello."
On the actual device, however, the output is something like "chv2f,,," and I can't tell whether the characters are really being recognized.
The screenshots themselves do correctly capture the image with the word "Hello" written on it.

There are no errors in particular.
I'd like to post the code, but since I don't know what part is wrong in the first place, I can't narrow down what to show.
Could it be that images created on the actual device can't be read properly?
If you have any ideas, please leave a comment.
The OCR engine I'm using is tesseract-ocr.

Note:
When I changed the Simulator version, it stopped working in the Simulator as well.
I later learned that tesseract-ocr doesn't work on anything later than iOS 6 (or 6.1?), so the OS version may be the cause.
If you need any more information, please let me know.

ios objective-c tesseract

2022-09-29 22:21

1 Answer

Sorry for the trouble; I solved it myself.

It turned out that the x,y coordinates were misaligned during the step that crops the image.
Because of the difference in screen resolution, I was cropping the wrong region, which is why the text wasn't recognized.
I had always been testing with the non-Retina iPad in the Simulator, and the same code failed on the iPad Retina and on the actual iPad (an iPad 3?).
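For anyone who hits the same problem: the fix amounts to multiplying the crop rectangle by the image's scale factor before cutting. Here is a minimal sketch of the idea (illustrative names such as CropImage and rectInPoints, not my actual code):

    #import <UIKit/UIKit.h>

    // Crop rectInPoints (given in UIKit points) out of a screenshot so that
    // the same rectangle works on both Retina and non-Retina iPads.
    UIImage *CropImage(UIImage *screenshot, CGRect rectInPoints)
    {
        // screenshot.scale is 1.0 on a non-Retina iPad and 2.0 on a Retina
        // one. CGImageCreateWithImageInRect works in pixels, so an unscaled
        // rectangle crops the wrong region on Retina devices.
        CGFloat scale = screenshot.scale;
        CGRect rectInPixels = CGRectMake(rectInPoints.origin.x * scale,
                                         rectInPoints.origin.y * scale,
                                         rectInPoints.size.width * scale,
                                         rectInPoints.size.height * scale);

        CGImageRef croppedRef = CGImageCreateWithImageInRect(screenshot.CGImage,
                                                             rectInPixels);
        UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                               scale:scale
                                         orientation:screenshot.imageOrientation];
        CGImageRelease(croppedRef);
        return cropped;
    }

Since the non-Retina Simulator has a scale of 1.0, the unscaled rectangle happened to be correct there, which is why the bug only showed up on Retina and actual devices.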

In the process, I also got tesseract to work on iOS 7 instead of iOS 6.
I don't think that's related, but I mention it just in case.

Thank you for your comments.


2022-09-29 22:21
