Computer Vision For iOS Developers Course
- A clear understanding of iOS development basics
- A macOS system
- An iPhone and an Apple Developer account to implement some of the projects
Welcome to the Computer Vision for iOS Developers course.
In this course, you'll learn the basics needed to understand Object Detection and Semantic Segmentation, and by the end of the course, you'll be able to train models that you can use in your apps.
We will cover the following topics in this course:
1) What is Computer Vision?
2) What are Object Detection and Semantic Segmentation?
3) Tools for creating and labeling image datasets
4) Image Dataset Augmentation
5) Tools and Environments for training neural networks
6) Integration of CoreML and TFLite models into iOS apps
7) Two projects that use Computer Vision in real-world applications
This course was made using the MakeML product: https://makeml.app
Who this course is for:
- iOS developers who want to learn how to train Object Detection and Semantic Segmentation neural networks themselves
- Startup founders who want to leverage AI in their products
Founder and CEO at MakeML.
MakeML is a macOS app that helps iOS developers train Object Detection and Semantic Segmentation neural networks.
I'm a Computer Vision engineer focusing on mobile applications.
Previously, I was an iOS software developer working mostly in Swift, though I also have strong experience with Objective-C.