Machine Learning Made Easy: Text Recognition Using Kotlin, MVVM, and Huawei ML Kit

Text recognition is built on the concept of OCR.

OCR (optical character recognition) reads an image character by character and matches each character against previously stored data, much as a human reads.

Anywhere repetitive manual reading is done, we can replace it with text recognition: in e-commerce, education, logistics, and a lot more.

Now, let’s discuss the Huawei Text Recognition Service.  

You can recognize text in both static images and dynamic camera streams.

You can even call these APIs synchronously or asynchronously, as your application requires.
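As a rough sketch of the difference, assuming an analyzer and a bitmap are already in hand (both methods below are SDK members, but treat the surrounding names as illustrative):

Kotlin

// Asynchronous: returns a Task<MLText>; results arrive via listeners.
analyzer.asyncAnalyseFrame(MLFrame.fromBitmap(bitmap))
    .addOnSuccessListener { mlText -> Log.d("OCR", mlText.stringValue) }
    .addOnFailureListener { e -> Log.e("OCR", "Recognition failed", e) }

// Synchronous: blocks the calling thread, so keep it off the main thread.
val blocks = analyzer.analyseFrame(MLFrame.fromBitmap(bitmap))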

You can use this service on-device, i.e. a compact machine learning model is added to your application, and it works even without a network connection.

You can use this service on the cloud, i.e. once we fetch our image, we transmit it to the cloud and get even better accuracy and results within milliseconds.
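Both flavors are created from the same factory; a minimal sketch of the two entry points:

Kotlin

// On-device analyzer: works offline with the bundled model packages.
val localAnalyzer = MLAnalyzerFactory.getInstance().localTextAnalyzer
// Cloud analyzer: sends the image to Huawei's cloud for higher accuracy.
val remoteAnalyzer = MLAnalyzerFactory.getInstance().remoteTextAnalyzer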

Below is the list of languages supported by ML Kit:

On device: Latin-based languages (such as English), Chinese, Japanese, and Korean, corresponding to the optional model packages in Step 2 below.

On cloud: a wider range of languages, including Chinese, English, Hindi, French, and German (the five used in the sample later in this article).

Development Process

Step 1: Create a new project in Android Studio. 

Step 2: Choose the dependencies as per your project requirements.

Groovy

// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.3.300'
// Import the Latin-based language model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:1.0.3.315'
// Import the Japanese and Korean model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:1.0.3.300'
// Import the Chinese and English model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.3.300'


If you want a lite version, use the below dependency:

Groovy

implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.3.300'


Step 3: Add the following metadata to the manifest file so that the machine learning model is automatically updated on the device:

XML

<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="ocr" />


Step 4: Add the below permissions in the manifest file:

XML

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />


I won’t be covering how to get an image from the device via the camera or gallery.

Let’s jump into the TextRecognitionViewModel class, which has received a bitmap containing the user’s image.
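For context, here is a minimal sketch of the ViewModel state the snippet below assumes; only the property names bitmap and result come from the code, the MutableLiveData types are my assumption:

Kotlin

// Hypothetical skeleton: the backing state assumed by textRecognition() below.
class TextRecognitionViewModel : ViewModel() {
    val bitmap = MutableLiveData<Bitmap>()   // image captured from camera/gallery
    val result = MutableLiveData<String>()   // recognized text exposed to the UI
}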

Below is the code you can use to call the text recognition API and get the String response:

Kotlin

fun textRecognition() {
    // Configure the cloud text analyzer: loose text density, five languages, arc borders.
    val setting = MLRemoteTextSetting.Factory()
        .setTextDensityScene(MLRemoteTextSetting.OCR_LOOSE_SCENE)
        .setLanguageList(arrayListOf("zh", "en", "hi", "fr", "de"))
        .setBorderType(MLRemoteTextSetting.ARC)
        .create()
    val analyzer = MLAnalyzerFactory.getInstance().getRemoteTextAnalyzer(setting)
    // Wrap the previously fetched bitmap in an MLFrame and analyze it asynchronously.
    val frame = MLFrame.fromBitmap(bitmap.value)
    val task = analyzer.asyncAnalyseFrame(frame)
    task.addOnSuccessListener {
        result.value = it.stringValue
    }.addOnFailureListener {
        result.value = "Exception occurred"
    }
}


Discussion

  1. I wanted to use cloud services, so I chose MLRemoteTextSetting.
  2. As per the density of characters, we can set setTextDensityScene() to OCR_LOOSE_SCENE or OCR_COMPACT_SCENE.
  3. Once the density is set, we set the text language with setLanguageList().
  4. We can pass an ArrayList<String> collection to it. I have added five languages to my model, but you can add languages as per your needs.
  5. MLRemoteTextSetting.ARC: return the vertices of a polygon border in arc format.
  6. Now our custom MLRemoteTextSetting object is ready, and we can pass it to the factory to create the MLTextAnalyzer object.

The next step is to create an MLFrame using the below code, providing your previously fetched image in bitmap format.

Java

MLFrame frame = MLFrame.fromBitmap(bitmap);


On the analyzer object, we call asyncAnalyseFrame(frame), passing the MLFrame we just created.

This yields a Task<MLText> object, on which you will get two callbacks:

  1. onSuccess
  2. onFailure

You can read the recognition result in onSuccess() and then stop the analyzer with the analyzer.stop() method to release detection resources.
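If you need more than the raw string, the MLText result can also be walked block by block; a sketch under that assumption (getBlocks() and stringValue are SDK members):

Kotlin

// Sketch: reading the result block by block, then releasing the analyzer.
task.addOnSuccessListener { mlText ->
    for (block in mlText.blocks) {
        Log.d("OCR", block.stringValue)
    }
    analyzer.stop()   // release detection resources once done
}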

If you want to use the on-device model, only the below changes are required.

Java

MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
    .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
    .setLanguage("en")
    .create();
MLTextAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
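Translated into the Kotlin ViewModel from earlier, the switch would look roughly like this; the rest of the flow (MLFrame creation and asyncAnalyseFrame) stays the same:

Kotlin

// Sketch: on-device analyzer in place of the cloud one.
val setting = MLLocalTextSetting.Factory()
    .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)   // detection mode for static images
    .setLanguage("en")                                 // on-device model language
    .create()
val analyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting)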



Final result:


Conclusion

I hope you liked this article. I would love to hear your ideas on how you use this kit in your applications.
