# @capacitor-community/image-to-text
Capacitor plugin for image-to-text processing using Apple's Vision framework on iOS and ML Kit's Vision framework on Android.
This project was forked from the Cap ML plugin written by Vennela Kodali. It was refactored and converted to Capacitor 4.
- For Capacitor 4 projects use v4.x
- For Capacitor 5 projects use v5.x
- For Capacitor 6 projects use v6.x
```bash
npm install @capacitor-community/image-to-text
```
There is one method, `detectText`, which takes the filename of an image and returns the text detected in it.
Add the following to your application:
```typescript
import { Ocr, TextDetections } from '@capacitor-community/image-to-text';

...

const data: TextDetections = await Ocr.detectText({ filename: '[get-filename-of-image-jpg]' });
for (let detection of data.textDetections) {
  console.log(detection.text);
}
```
The above code detects the text in the image file and logs each detected line with `console.log`.
You can use the `@capacitor/camera` plugin to take a photo and extract the text from it:
```typescript
import { Camera, CameraResultType, CameraSource } from '@capacitor/camera';
import { Ocr, TextDetections } from '@capacitor-community/image-to-text';

...

const photo = await Camera.getPhoto({
  quality: 90,
  allowEditing: true,
  resultType: CameraResultType.Uri,
  source: CameraSource.Camera,
});
const data: TextDetections = await Ocr.detectText({ filename: photo.path });
for (let detection of data.textDetections) {
  console.log(detection.text);
}
```
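The plugin also accepts a base64-encoded image instead of a filename (see `DetectTextBase64Options` below). A minimal sketch, assuming the photo is requested as base64 from `@capacitor/camera`:

```typescript
import { Camera, CameraResultType, CameraSource } from '@capacitor/camera';
import { Ocr, TextDetections } from '@capacitor-community/image-to-text';

const photo = await Camera.getPhoto({
  quality: 90,
  resultType: CameraResultType.Base64,
  source: CameraSource.Camera,
});

// base64String is optional on the Camera result, so guard before calling the plugin.
if (photo.base64String) {
  const data: TextDetections = await Ocr.detectText({ base64: photo.base64String });
  for (const detection of data.textDetections) {
    console.log(detection.text);
  }
}
```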
A full sample application can be found here.
No additional setup is required to use this plugin in an iOS Capacitor project.
Your project must include a `google-services.json` file stored in the Android project folder (usually `android/app`).
- Sign in to console.firebase.google.com
- Click on **Add Project** and follow through the steps.
- Click the **Android** icon to create an Android app.
- Enter the **Package Name**, which must match your app's package name (you can find it in `android/app/AndroidManifest.xml`).
- Click **Register App**.
- Download `google-services.json` and save it into your project's `android/app` directory.
The sample project has this in place in its `build.gradle` (see here as a reference).

Note: most starter Capacitor projects are preconfigured to load `google-services.json`.
### detectText(...)

```typescript
detectText(options: DetectTextFileOptions | DetectTextBase64Options) => Promise<TextDetections>
```

Detect text in an image.
| Param | Type | Description |
| --- | --- | --- |
| `options` | `DetectTextFileOptions \| DetectTextBase64Options` | Options for text detection |

**Returns:** `Promise<TextDetections>`
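For illustration, a small convenience wrapper around the signature above that returns only the recognized strings. This helper is hypothetical (not part of the plugin) and assumes the option interfaces are exported by the package:

```typescript
import {
  Ocr,
  DetectTextBase64Options,
  DetectTextFileOptions,
} from '@capacitor-community/image-to-text';

// Hypothetical helper: accepts either option shape and flattens the result
// down to the detected text lines.
async function recognizeText(
  options: DetectTextFileOptions | DetectTextBase64Options,
): Promise<string[]> {
  const result = await Ocr.detectText(options);
  return result.textDetections.map((detection) => detection.text);
}

// Usage:
// const lines = await recognizeText({ filename: '/path/to/image.jpg' });
// const lines = await recognizeText({ base64: base64Jpeg });
```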
#### TextDetections

| Prop | Type |
| --- | --- |
| `textDetections` | `TextDetection[]` |
#### TextDetection

| Prop | Type |
| --- | --- |
| `bottomLeft` | `[number, number]` |
| `bottomRight` | `[number, number]` |
| `topLeft` | `[number, number]` |
| `topRight` | `[number, number]` |
| `text` | `string` |
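Each detection carries the four corner points of the detected line of text, which can be used to highlight or crop regions. A sketch of reading them (the coordinate space, pixels versus normalized values, is not documented here, so verify against your own images):

```typescript
import { Ocr, TextDetections } from '@capacitor-community/image-to-text';

const data: TextDetections = await Ocr.detectText({ filename: '/path/to/image.jpg' });

for (const detection of data.textDetections) {
  const [x1, y1] = detection.topLeft;
  const [x2, y2] = detection.bottomRight;

  // Rough bounding box derived from two opposite corners.
  const width = Math.abs(x2 - x1);
  const height = Math.abs(y2 - y1);
  console.log(`"${detection.text}" at (${x1}, ${y1}), approx ${width} x ${height}`);
}
```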
#### DetectTextFileOptions

| Prop | Type |
| --- | --- |
| `filename` | `string` |
| `orientation` | `ImageOrientation` |
#### DetectTextBase64Options

| Prop | Type |
| --- | --- |
| `base64` | `string` |
| `orientation` | `ImageOrientation` |
#### ImageOrientation

| Members | Value |
| --- | --- |
| `Up` | `'UP'` |
| `Down` | `'DOWN'` |
| `Left` | `'LEFT'` |
| `Right` | `'RIGHT'` |
Images are expected to be in portrait mode, i.e. with the text facing up. The plugin will still attempt to process images in other orientations, but the results may be gibberish.
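If you know how the image is rotated, you can pass an orientation hint with either option shape. A minimal sketch, assuming the `ImageOrientation` enum is exported by the plugin:

```typescript
import { Ocr, ImageOrientation, TextDetections } from '@capacitor-community/image-to-text';

// Hint that the photo was captured rotated to the left.
const data: TextDetections = await Ocr.detectText({
  filename: '/path/to/image.jpg',
  orientation: ImageOrientation.Left,
});
```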
iOS and Android are supported. Web is not.
| Feature | iOS | Android |
| --- | --- | --- |
| ML Framework | CoreML Vision | Firebase ML Kit |
| Text Detection with Still Images | Yes | Yes |
| Detects lines of text | Yes | Yes |
| Bounding Coordinates for Text | Yes | Yes |
| Image Orientation | Yes (Up, Left, Right, Down) | Yes (Up, Left, Right, Down) |
| Skewed Text | Yes | Unreliable |
| Rotated Text (up to ~45°) | Yes | Yes (but with noise) |
| On-Device | Yes | Yes |
| SDK/OS Version | iOS 13.0 or newer | Targets API level >= 16; Gradle >= 4.1; com.android.tools.build:gradle >= v3.2.1; compileSdkVersion >= 28 |
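Because there is no web implementation, calls can be guarded when the same code also runs in a browser. A sketch using `Capacitor.isNativePlatform()` from `@capacitor/core`:

```typescript
import { Capacitor } from '@capacitor/core';
import { Ocr } from '@capacitor-community/image-to-text';

if (Capacitor.isNativePlatform()) {
  const data = await Ocr.detectText({ filename: '/path/to/image.jpg' });
  console.log(data.textDetections.map((detection) => detection.text));
} else {
  console.warn('Text detection is only available on iOS and Android.');
}
```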
Hippocratic License Version 2.0.

For more information, refer to the LICENSE file.