If you haven’t heard about AutoML yet, it’s the newest ML offering on Google Cloud and lets you build custom ML models trained on your own data, no model code required. It’s currently available for image, text, and translation models. There are lots of resources out there to help you prepare your data and train models in AutoML, so in this post I want to focus on the prediction (or serving) part of AutoML. I’ll walk you through building a simple web app that generates predictions from your trained model. It makes use of Firebase and Cloud Functions, so it’s entirely serverless (yes, I put serverless and ML in the same blog post 🙄). Here’s the app architecture:

Want to skip to the code? It’s all available in [this GitHub repo](https://github.com/sararob/automl-api-demo).

## The AutoML API

I was particularly excited to discover that in addition to providing an entire UI for building and training models, AutoML has an API for adding training data, deploying models, generating predictions, and more. Let’s say you’re crowdsourcing training data for your model: with the AutoML API you could dynamically add new data to your project’s dataset and regularly train updated versions of your model. I’ll cover that in a future post; here I’ll focus on the prediction piece.

For this demo we’ll build a web app for generating predictions on a trained AutoML Vision model (though it could easily be adapted to AutoML Natural Language, since they use the same API). The particular model I’ll be querying can detect the type of cloud in an image. On the frontend, users will be able to upload an image for prediction. Our app will upload that image to Firebase Storage, which will kick off a Cloud Function. Inside the function we’ll call the AutoML API and return the prediction data to our frontend client. The finished product looks like this:

## Setting up your Firebase project

Firebase is a great way to get apps up and running quickly without worrying about managing servers.
It provides a variety of SDKs that make it easy to do things like upload images, save data, and authenticate users directly from client-side JavaScript.

For this blog post I’ll assume you already have a trained AutoML Vision model that’s ready for predictions. The next step is to associate this project with Firebase. Head over to the Firebase console and click **Add project**. Then click on the dropdown and select the Cloud project where you’ve created your AutoML model. If you’ve never used Firebase before, you’ll also need to install the CLI.

Next, clone the code from [this GitHub repo](https://github.com/sararob/automl-api-demo) and `cd` into the directory where you’ve downloaded it. To initialize Firebase in that directory, run `firebase init` and select Firestore, Functions, Hosting, and Storage when prompted (this demo uses all four).

Now we’re ready to go. In the next step we’ll set up and deploy the Cloud Function that calls AutoML.

## AutoML + Cloud Functions for Firebase

You can use Cloud Functions independently of Firebase, but since I’m using so many Firebase features in my app already, I’ll make use of the handy Firebase SDK for Cloud Functions. Take a look at the [functions/index.js](https://github.com/sararob/automl-api-demo/blob/master/functions/index.js) file and update the 3 variables at the top to reflect the info for your project.

Our Cloud Function is defined in `exports.callCustomModel`. To trigger this function whenever a file is added to our Storage bucket, we use `functions.storage.object().onFinalize()`.
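Before wiring everything into the function, it helps to see the shape of the AutoML call itself. Here’s a minimal sketch of the two pieces the function assembles: the model’s full resource name and the prediction request body. The project and model IDs below are placeholders (use the values from the three variables at the top of `functions/index.js`), and the request shape mirrors the v1beta1 predict API used by the Node.js AutoML client as I understand it, so double-check against the repo.

```javascript
// Sketch of the data the Cloud Function assembles before calling AutoML.
// 'my-project' and 'ICN1234' are placeholder IDs for illustration.

// AutoML models are addressed by a full resource name:
function modelPath(project, location, modelId) {
  return `projects/${project}/locations/${location}/models/${modelId}`;
}

// The uploaded image bytes must be base64 encoded for the request payload.
// The { payload: { image: { imageBytes } } } shape follows the v1beta1
// predict request used by the Node.js AutoML client:
function buildPredictionRequest(name, imageBuffer) {
  return {
    name,
    payload: { image: { imageBytes: imageBuffer.toString('base64') } },
  };
}

const request = buildPredictionRequest(
  modelPath('my-project', 'us-central1', 'ICN1234'),
  Buffer.from('fake-image-bytes')
);
console.log(request.name);
// -> projects/my-project/locations/us-central1/models/ICN1234
```

The actual function passes a request like this to the prediction client’s `predict()` method; see `functions/index.js` in the repo for the full version.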
Here’s what’s happening in the function:

1. Download the image to the Cloud Functions file system (we can use the `tmp/` dir to do this)
2. Base64 encode the image to prepare it for the AutoML prediction request
3. Make the AutoML prediction request using the handy `nodejs-automl` package
4. Write the prediction response to Cloud Firestore

We can create an AutoML prediction client with just a couple of lines of code, build the prediction request JSON, and send it to the AutoML API by calling `predict()` on the client.

Time to deploy the function. From the root directory of this project, run `firebase deploy --only functions`. When the deploy completes, you can test it out by navigating to the **Storage** section of your Firebase console and uploading an image:

*Uploading an image to Firebase Storage*

Then, head over to the **Functions** part of the console to look at the logs. If the prediction request completed successfully, you should see the prediction response JSON in the logs:

*Function logs*

Inside the function, we also write the prediction metadata to Firestore so that our app can display this data on the client. In the Firestore console, you should see the metadata saved in an `images/` collection:

*Prediction metadata in Cloud Firestore*

With the function working, it’s time to set up the app frontend.

## Putting it all together

To test the frontend locally, run `firebase serve` from the root directory of your project and navigate to `localhost:5000`. Click on the **Upload a cloud photo** button in the top right. If the image you uploaded returned a prediction from your model, you should see that displayed in the app. Remember that this app is configured for my cloud detector model, but you can easily modify the code to make it work for your own domain. When you upload a photo, check your Functions, Firestore, and Storage dashboards to ensure everything is working.
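To give a feel for what the client does with the Firestore document, here’s a tiny helper that turns saved prediction metadata into a display string. The `{ label, score }` document shape is a hypothetical structure for illustration only; the real field names live in the repo’s frontend code.

```javascript
// Hypothetical sketch: format the prediction metadata the function saved to
// Firestore for display in the UI. The { label, score } shape is an assumed
// document structure, not necessarily what the repo writes.
function formatPrediction(doc) {
  const pct = (doc.score * 100).toFixed(1);
  return `${doc.label} (${pct}% confidence)`;
}

console.log(formatPrediction({ label: 'cumulus', score: 0.97 }));
// -> cumulus (97.0% confidence)
```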
Finally, let’s make use of Firebase Hosting to deploy the frontend so we can share it with others! Deploying the app is as simple as running `firebase deploy --only hosting`. When the deploy finishes, your app will be live at your own `firebaseapp.com` domain.

That’s it! We’re getting predictions from a custom ML model with entirely serverless technology. To dive into the details of everything covered in this post, check out these resources:

- [Full code on GitHub](https://github.com/sararob/automl-api-demo)
- AutoML Vision and Natural Language docs
- Firebase docs

Let me know what you think in the comments or find me on Twitter at @SRobTweets.