
It can be a challenge to scrape or extract data from certain document types using RPA or UI automation technologies. Automating data extraction from ID cards, for example, when faced with a variety of non-standardized formats, can be extremely difficult.

AI-based OCR software like Nanonets can augment the capabilities of RPA tools such as UiPath when faced with such complex documents!

Nanonets scans and "reads" documents to recognize and extract data contextually. By leveraging Nanonets with UiPath, users can create intelligent automations that handle data extraction from even the most complex document types. Nanonets allows RPA tools to "understand" the document, so their automated actions can function intelligently rather than relying on a templatized approach.

Nanonets is OCR software with advanced AI/ML capabilities that allow it to take on complex document data extraction use cases.

In this article, let's look at how Nanonets can enhance UiPath workflows to make RPA bots intelligent. We will focus on supercharging UiPath ID card data extraction workflows.

Benefits of RPA with advanced OCR

  • Automate data capture from all structured/unstructured documents.
  • No template setup required.
  • Integrate easily within your workflows. Make your bots smarter.
  • Increase automation rates to 90-95% using AI.
  • Reduce workload for back-office and AP teams.

Understanding RPA, ML, and Nanonets

Optical character recognition (OCR) is a key feature in any good robotic process automation (RPA) solution. In short, OCR is a technology used to extract text from images and documents via mechanical or electronic means. It converts typed, handwritten or printed text into machine-encoded text – this data can then be used in electronic business processes without someone manually capturing it.

Unlike legacy OCR tools that need to be trained one character or template at a time, advanced AI-powered OCR solutions can recognize and capture data from machine printed documents with high levels of accuracy. Their ability to accurately decipher handwritten text is also rapidly improving.

Here's an example of how cognitive tools (including advanced OCR and machine learning) can be applied to a banking use case:

Integrating Nanonets with an RPA tool requires a basic understanding of OCR, RPA, and machine learning. Different tasks call for different approaches and algorithms at various points.

Here is a visual representation of an RPA process working with Nanonets to create an optimised workflow:

RPA with Nanonets


ID card data extraction using UiPath RPA from a file input

Let's go through the steps to integrate UiPath with Nanonets for processing ID card images using File Input:

Create a UiPath ID card process workflow

From the UiPath home menu, create a new process. Provide a name, location, and description.


Once the project is open, create a new workflow.



Provide the name of the workflow and create it.

Design the UiPath ID card process flow

From the Activities pane, search for Invoke Code and drag and drop the activity into the workflow.


Right-click on the activity and set it as the start node, or connect the Start stage with the Invoke Code activity stage.


Search for Message Box in the Activities pane and drag and drop it into the workflow. Connect it with the Invoke Code stage.


Train an OCR model with Nanonets

Go to Nanonets, select New Model, and create your own model.


To train the model, add multiple sample files similar to those the bot will process.

Import at least 10 images and click Next.

Before you start training the model, define the fields you want to extract from the input documents. Here I have selected passport documents as the input template, so I am using the values below:

  • Surname
  • Name
  • Nationality
  • Date of Issue
  • Date of Expiry

Once the data fields are defined, select Start Training.

Now hover over the input files, select the required input, and assign it to the data field as in the image below.


Select all inputs and assign fields on every training image. Please note that the better the training data, the more accurate the results.


Once you've added enough training data, click on Train Model. It will start building the model.



Once the model is trained, you will see the screen below:

These are different neural network architectures that learn from your data; the platform uses an AutoML architecture search to choose the best-performing model.

Now go to My Models and open the model you have created.


Click on the model and select Integrate.


Select the C# option and copy the code by clicking the COPY CODE button.


UiPath code setup

Go to UiPath, select the Invoke Code activity, and change the language to CSharp from the Properties panel.

Open the Invoke Code activity, edit the code, and paste the copied code into the box. Update the model ID, API key, and file path.

The copied code will contain your default API key; if you regenerate the key, update it in the code as well.

To get your API key, go to My Account -> API Key.

Model ID:

Your UiPath code should look like this:
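If the screenshot is hard to read, here is a minimal sketch of what the pasted Invoke Code body typically looks like. It assumes the snippet from the Integrate tab uses the classic RestSharp API with HTTP basic authentication (which is why System.Text is imported later), and that Response is exposed to the code as an In/Out argument mapped to the workflow variable created in the next step. The model ID, API key, and file path are placeholders you must replace with your own values.

```csharp
// Minimal sketch of the Invoke Code body for a local file (placeholders, not the exact Nanonets snippet).
var client = new RestClient("https://app.nanonets.com/api/v2/OCR/Model/YOUR_MODEL_ID/LabelFile/");
var request = new RestRequest(Method.POST);

// Nanonets authenticates with HTTP basic auth: the API key is the username, the password is empty.
string apiKey = "YOUR_API_KEY";
request.AddHeader("Authorization",
    "Basic " + Convert.ToBase64String(Encoding.ASCII.GetBytes(apiKey + ":")));

// Attach the local ID card image as a multipart file upload.
request.AddFile("file", @"C:\Images\passport_sample.jpg");

// Execute the call and hand the raw JSON back to UiPath via the Response argument.
IRestResponse result = client.Execute(request);
Response = result.Content;
```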


Go to Variables and create a variable called Response. Set its scope to the flowchart (the flowchart's name).


From Manage Packages, go to nuget.org and install the RestSharp package.


Go to Imports and import the two namespaces below, as these are external namespaces used by the C# code here.

RestSharp

System.Text




Double-click on the Message Box and set its value to Response.


Now save the code and run it using Ctrl+F5. You will get all the defined fields (Name, Surname, Nationality, Date of Issue, Date of Expiry) for the input image in a message box.

Use UiPath features to manipulate this information according to your specific requirements.
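For example, rather than stopping at a message box, the raw JSON in Response can be parsed before it is handed to downstream activities. The sketch below is one possible approach using Newtonsoft.Json (bundled with UiPath); it assumes the response follows the usual Nanonets OCR result schema (result → prediction → label/ocr_text), and the Fields argument is an illustrative Out Dictionary you would declare on a second Invoke Code activity.

```csharp
// Illustrative parsing of the Nanonets response (not part of the copied snippet).
// Assumes "Response" (In) holds the raw JSON and "Fields" (Out, Dictionary<string, string>)
// has been declared as an argument on the activity.
var json = Newtonsoft.Json.Linq.JObject.Parse(Response);
Fields = new System.Collections.Generic.Dictionary<string, string>();

// Each prediction carries a field label (e.g. "surname") and the recognised text.
foreach (var prediction in json["result"][0]["prediction"])
{
    Fields[(string)prediction["label"]] = (string)prediction["ocr_text"];
}
```

The resulting dictionary can then feed assignments, Excel writes, or any other UiPath activity your process needs.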


ID card data extraction using UiPath RPA from a web URL

In this part, the input is a web image, so we provide the web link (the image's address) instead of the local file path used earlier.

Here the steps are all identical, except that the code below (the URL-based variant) should be copied from Nanonets and used in the Invoke Code stage.

Your UiPath C# code should look like this:
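As a rough illustration (the exact snippet again comes from the Integrate tab in Nanonets), the only difference from the file-input version is that the request targets the URL-based endpoint and passes the image address as a form parameter instead of attaching a file; the model ID, API key, and image URL below are placeholders.

```csharp
// Minimal sketch of the Invoke Code body for a web-hosted image (placeholders, not the exact snippet).
var client = new RestClient("https://app.nanonets.com/api/v2/OCR/Model/YOUR_MODEL_ID/LabelUrls/");
var request = new RestRequest(Method.POST);

// Same basic authentication as before: API key as the username, empty password.
string apiKey = "YOUR_API_KEY";
request.AddHeader("Authorization",
    "Basic " + Convert.ToBase64String(Encoding.ASCII.GetBytes(apiKey + ":")));

// Pass the image address instead of uploading a local file.
request.AddParameter("urls", "https://example.com/sample-passport.jpg");

IRestResponse result = client.Execute(request);
Response = result.Content;
```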


Once it is done, save the code and run it as before.

Conclusion

In this blog, we learned how RPA bots can be made more intelligent using the Nanonets AI-based OCR engine.

The engine is also smart enough to handle various formats, orientations, blurry images, and more:

Nanonets ID Card OCR

We hope you found this blog useful in your automation journey!

Nanonets is a Silver-certified UiPath partner, and you can download the UiPath connector right from the marketplace to get started.