I have noticed that Tesseract performs poorly when scanning documents that follow the old practice (going back to typewriter days) of using rows of asterisks or equal signs as text separators.
Example:
Some document
Line one
**************************
Line two
=== === === === === === ===
Line three
On something like this, Tesseract tries to match the lines of asterisks or equal signs to text, producing output such as EERKKRKKERKKREAKREKKAKRKKKAK or RRR RRR NETT RRR RRR, which is rarely the desired outcome.
It is my understanding that the issue likely comes from the training data rather than the engine itself. If that is so, I wonder if the training sets could be augmented to cover these cases.
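As a stopgap until models handle this, such garbage can be filtered after OCR. The sketch below is only an illustration, not part of Tesseract: the function names, thresholds, and the "few distinct characters" heuristic are all assumptions, and the heuristic can misfire on short lines of real text.

```python
def is_separator_line(line: str, min_len: int = 12, max_distinct: int = 5) -> bool:
    """Heuristic: a line is probably a typewriter-style separator (or garbage
    OCR'd from one) if it is long enough and uses very few distinct
    non-space characters."""
    chars = [c for c in line.strip() if not c.isspace()]
    if len(chars) < min_len:
        return False
    return len(set(chars)) <= max_distinct

def normalize_separators(text: str, replacement: str = "-" * 20) -> str:
    """Replace suspected separator lines in OCR output with a fixed marker."""
    return "\n".join(
        replacement if is_separator_line(line) else line
        for line in text.splitlines()
    )
```

For the example above, both the literal `**************************` line and garbled outputs like `RRR RRR NETT RRR RRR` trip the heuristic, while ordinary prose lines pass through unchanged.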
I know from personal experience that Tesseract can be trained to recognize sequences like "........." (often used in tables) if the correct number of dots was part of the training data. Therefore I am fairly sure that your examples could also be recognized with the right model. Typically, for real-world documents, humans don't like counting dots or other runs of repeated characters and omit them in the transcription. And evidently the generated training data did not contain such sequences.
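Following that reasoning, one way to augment a training corpus would be to inject separator lines into the training text before rendering it for training. This is only a sketch of the text-generation step; the separator characters, lengths, and injection rate are arbitrary choices, and the function names are hypothetical.

```python
import random

SEPARATOR_CHARS = ["*", "=", "-", ".", "_"]  # common typewriter-era separators

def make_separator(rng: random.Random) -> str:
    """Build one separator line: either a solid run of one character
    ('********') or spaced groups of it ('=== === ===')."""
    ch = rng.choice(SEPARATOR_CHARS)
    if rng.random() < 0.5:
        return ch * rng.randint(10, 40)
    group = ch * rng.randint(2, 4)
    return " ".join([group] * rng.randint(3, 8))

def augment_training_text(lines, rng=None, rate=0.1):
    """Interleave separator lines into existing training text, inserting
    one after a given line with probability `rate`."""
    rng = rng or random.Random(0)
    out = []
    for line in lines:
        out.append(line)
        if rng.random() < rate:
            out.append(make_separator(rng))
    return out
```

The augmented text could then be rendered and trained on with the usual tooling, so the model sees such runs with varied lengths and spacings rather than never at all.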