
Getting started with ML Kit

Hey guys, most of you must have watched Google I/O 2018. Among all the amazing things announced at I/O this year, Google introduced ML Kit. ML Kit is a feature available in Firebase that lets any regular person get a taste of machine learning without knowing its ins and outs.


Let’s get started

Step 1 – Create a new Project in Android Studio.

-> Empty Activity

-> Next->Next->Next->Finish


Step 2 – Once the project is created, you need to integrate Firebase with your app. Firebase is a backend platform provided by Google. You can do this in two ways: manually, or by using Tools -> Firebase from Android Studio. We'll do it manually:


Go to the Firebase console.

Sign into your Google account if you haven’t already

Click on Get Started

Click on Add Project

Once the project is created, go to Settings -> Project settings (the cogwheel icon at the top of the left pane).


Under “Your apps”, click Add app -> Add Firebase to your Android app.

Enter package name of your app and click “Register”.

Now it will provide a google-services.json file. Download it.

Now, switch to the Project view in Android Studio and copy that file into the app folder.

Add the dependencies to your build.gradle files as instructed in the next screen of the console.

Now, let Gradle sync your project files.

Note: if the sync fails or you want to trigger it explicitly, press Ctrl+Shift+A, type “Sync Project with Gradle Files” and select the first option.

Hurray! Firebase is now integrated.
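For reference, the Firebase wiring from the steps above usually boils down to a Gradle plugin plus a core dependency. Treat the version numbers below as placeholders from mid-2018; use whatever the console actually shows you:

```groovy
// project-level build.gradle (versions are examples, not authoritative)
buildscript {
    dependencies {
        classpath 'com.google.gms:google-services:4.0.1'
    }
}

// app-level build.gradle
dependencies {
    implementation 'com.google.firebase:firebase-core:16.0.1'
}
apply plugin: 'com.google.gms.google-services'
```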


Now, time to implement ML Kit.


In this tutorial, we will use ML Kit's face detection on an image of a face, and the app will report various parameters based on the face (for now, just the probability that the person is smiling).
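ML Kit reports the smiling result as a float between 0.0 and 1.0, with a negative sentinel value when it cannot be computed. As a taste of what we will do with that number later, here is a tiny helper; the class and method names are made up for illustration and are not part of any library:

```java
// Hypothetical helper: turns a smiling probability (0.0 to 1.0, or a
// negative sentinel when uncomputed) into display text for our TextView.
public class SmileText {
    public static String describeSmile(float p) {
        if (p < 0) {
            return "Smile could not be computed";
        }
        // Math.round(float) rounds to the nearest whole percentage
        return "Smiling probability: " + Math.round(p * 100) + "%";
    }
}
```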

In your Firebase project window, click the ML Kit option in the left pane.

Click get started -> Face detection -> Get Started


Click on Android in the next window.


Add the dependencies as shown in the next window.
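At the time of writing, the face detection dependency looks something like the following; the version changes often, so prefer whatever the console shows you:

```groovy
// app-level build.gradle (example version from mid-2018)
dependencies {
    implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
}
```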


Now, switch to the Android view in the left pane of Android Studio.

Go to res/layout/activity_main.xml.

Change the root ConstraintLayout to a LinearLayout.

Add a TextView.

Add two Buttons.

Your activity_main.xml should look similar to this:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:id="@+id/details"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Results will appear here" />

    <Button
        android:id="@+id/gallery"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Gallery" />

    <Button
        android:id="@+id/camera"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Camera" />
</LinearLayout>

Now head on to MainActivity.java.

We need to add variables for the TextView and both of the Buttons. Declare them as fields so that you can access them throughout the class. But note: do not initialise them outside a method like onCreate. Calling findViewById() before the layout has been set leads to a NullPointerException.

Don’t worry, I’ll link you to my file and you can refer to that in case you want.

Now that we are done declaring the variables, we need to initialise them. We need to do this inside a method that runs after the layout has been set. As ours is a very basic app, we are better off initialising them inside the onCreate() method of MainActivity.


details = findViewById(R.id.details); // the TextView that shows results

gallery = findViewById(R.id.gallery); // the "Gallery" button

camera = findViewById(R.id.camera); // the "Camera" button




This will do the job.


Now, according to Google's machine learning documentation, we need to add some methods. Everything is handled by the classes Firebase provides, so we don't have to write any machine-learning code ourselves; this keeps the code small and lets us get started easily.
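Concretely, those Firebase calls look roughly like this. This is a sketch against the 2018-era firebase-ml-vision API, assuming a Bitmap named bitmap loaded from the picked image and the TextView details from our layout; follow the official docs for the exact, current names:

```java
// Wrap the picked Bitmap for ML Kit
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

// Enable classification so smiling probability gets computed
FirebaseVisionFaceDetectorOptions options =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setClassificationType(
                        FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                .build();

FirebaseVisionFaceDetector detector =
        FirebaseVision.getInstance().getVisionFaceDetector(options);

// Detection runs asynchronously and calls us back with the faces found
detector.detectInImage(image)
        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
            @Override
            public void onSuccess(List<FirebaseVisionFace> faces) {
                for (FirebaseVisionFace face : faces) {
                    // Probability (0.0 to 1.0) that this face is smiling
                    details.setText("Smiling probability: "
                            + face.getSmilingProbability());
                }
            }
        });
```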


Now, for this job, we use an external library called EasyImage to pick images from the camera or the gallery. You can follow its documentation here. As the name suggests, it's really easy to use: you just need to add JitPack to the project-level build.gradle and a dependency to the app-level build.gradle. Sounds like jargon? Let's do it.


In Android Studio, under Gradle Scripts, you will find two build.gradle files. Open the first (project-level) one.


allprojects {

    repositories {

        google()
        jcenter()

        maven {
            url "https://jitpack.io" // JitPack. This should be present.
        }
    }
}
Add <code>implementation 'com.github.jkwiecien:EasyImage:1.3.1'</code> to the app-level build.gradle.


Both your files should now look like this.

Now we are done with the library adding part. All we need to do now is to work on picking the image from camera or the gallery and feeding it to the apt Firebase class.


We have got two buttons, remember?

We need to set click listeners on both of them so that clicking each one launches the corresponding picker.

When we click the “Gallery” button, we use EasyImage to open the gallery picker, and when we click the “Camera” button, we use EasyImage to open the camera.


Once image picking is successful, EasyImage provides callback methods via an overridden function so that we can manipulate the image as per our needs. You just need to follow EasyImage’s documentation.


In the onImagePicked callback, we pass the picked image on to the Firebase detection methods. You may refer to my code for the methods.
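For reference, EasyImage's callback wiring looks roughly like this under its 1.x documentation. The BitmapFactory step and the runFaceDetection() helper are assumptions for illustration: runFaceDetection() stands in for whatever method you write around the Firebase detector calls:

```java
// Forward activity results to EasyImage, which calls us back with the file
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    EasyImage.handleActivityResult(requestCode, resultCode, data, this,
            new DefaultCallback() {
                @Override
                public void onImagePicked(File imageFile,
                                          EasyImage.ImageSource source, int type) {
                    // Load the picked file into a Bitmap and hand it to the
                    // ML Kit face detector (hypothetical helper)
                    Bitmap bitmap =
                            BitmapFactory.decodeFile(imageFile.getAbsolutePath());
                    runFaceDetection(bitmap);
                }
            });
}
```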


Now that everything is done, we call <code>EasyImage.openCamera(MainActivity.this, 100);</code> and <code>EasyImage.openGallery(MainActivity.this, 100);</code> in the two click listeners.

Note: The second parameter in the above methods is called the request code. It can be any arbitrary integer; it is used to tell results apart when activities started with methods like startActivityForResult() return. For more info, you may refer to the Android docs. For now, no need to worry much about it.


camera.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        // Opens camera dialog
        EasyImage.openCamera(MainActivity.this, 100);
    }
});

gallery.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        // Opens gallery picker
        EasyImage.openGallery(MainActivity.this, 100);
    }
});
Voila, the app is ready.

Code – .


Stay tuned with Tech Includes for more insights into Programming and Tips and Tricks in Computer Science. Peace.


Written by Jashaswee Jena

