Introduction to Core ML

In the introduction to the new framework at WWDC 2017, the speakers made some interesting revelations.

Check out my notes below:

  • Core ML is a lower-level framework that supports a bunch of model types such as SVMs, neural networks, random forests, etc.
  • If you don’t want to work at the model level, there are two frameworks (Vision and NLP) that sit on top of Core ML, ready for use
  • The Vision API offers object tracking, face detection, real-time image recognition, and many others, all pre-trained (see the first sketch after this list)
  • As for the NLP API, it offers language identification, named entity recognition, and many others (also sketched below)
  • As far as Core ML goes, Apple presented the conversion of Python models into native iOS machine learning models (via the coremltools Python package)
  • Once a model is converted, all you have to do is drag it into your Xcode project and it is ready for use
  • Using an ML model in Xcode takes only a few lines of code (see the last sketch below):
    • instantiate your model
    • give it the parameters it needs
    • call its prediction function

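For example, here’s a minimal sketch of face detection with the Vision framework; the `detectFaces` helper is my own, and it assumes you already have a `CGImage` in hand:

```swift
import Vision
import CoreGraphics

// Build a face-detection request; the completion handler receives
// VNFaceObservation results with normalized bounding boxes.
let faceRequest = VNDetectFaceRectanglesRequest { request, error in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        print("Face at \(face.boundingBox)") // coordinates normalized to [0, 1]
    }
}

// Hypothetical helper: run the request against an image you already have.
func detectFaces(in image: CGImage) {
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([faceRequest])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```
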
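On the NLP side, here’s a sketch of language identification and named entity recognition with NSLinguisticTagger (iOS 11); the sample text is just a placeholder:

```swift
import Foundation

let text = "Tim Cook introduced Core ML at WWDC in San Jose."

let tagger = NSLinguisticTagger(tagSchemes: [.language, .nameType], options: 0)
tagger.string = text

// Language identification
print(tagger.dominantLanguage ?? "unknown") // "en"

// Named entity recognition: walk the words and keep name-type tags
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]
let nameTags: [NSLinguisticTag] = [.personalName, .placeName, .organizationName]
let range = NSRange(location: 0, length: text.utf16.count)
tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    if let tag = tag, nameTags.contains(tag) {
        let name = (text as NSString).substring(with: tokenRange)
        print("\(name): \(tag.rawValue)")
    }
}
```
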
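Putting those three steps together, here’s roughly what using a converted model looks like. The FlowerClassifier class, its image input, and the classLabel output are hypothetical stand-ins for whatever interface Xcode generates from your .mlmodel file:

```swift
import CoreML
import CoreVideo

// 1. Instantiate the model (FlowerClassifier is the hypothetical class
//    Xcode generates after you drag FlowerClassifier.mlmodel into the project).
let model = FlowerClassifier()

// 2. Give it the parameters it needs, 3. call the prediction function.
func classify(_ pixelBuffer: CVPixelBuffer) {
    guard let output = try? model.prediction(image: pixelBuffer) else {
        print("Prediction failed")
        return
    }
    print(output.classLabel) // top predicted label
}
```
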
Core ML, the Vision API, and the NLP API all look very promising. It will be interesting to see what kinds of apps are made with these libraries this upcoming year.
