Adding Core ML Magic to a Camera-Based Workflow – Walter Tyree

This talk explores what is involved in migrating a system that uses the camera and rectangle recognition to one that uses machine learning to recognize only specific rectangles. After a quick explanation of how the AVFoundation and Vision frameworks work with the camera to recognize landmarks in images, we will explore the process needed to sprinkle in some Core ML magic. As with many things in iOS, the code itself is relatively straightforward, but some of the related tasks are much more complicated than they appear. This talk will touch on Vision, AVFoundation, Core ML, and Create ML.
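
To give a flavor of what that "sprinkling" looks like, here is a minimal sketch (not taken from the talk) of the before-and-after: a plain VNDetectRectanglesRequest driven by AVFoundation camera frames, with an optional VNCoreMLRequest layered on top. The CardClassifier model name is a hypothetical Create ML image classifier; substitute whatever model your own project produces.

import AVFoundation
import CoreML
import ImageIO
import Vision

// A sketch of the migration the talk describes: plain Vision rectangle
// detection on camera frames, with a Core ML classifier layered on top
// so that only specific rectangles are kept. "CardClassifier" is a
// hypothetical Create ML model; substitute your own.
final class RectangleRecognizer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Step 1: plain rectangle detection, no machine learning involved.
    private lazy var rectangleRequest: VNDetectRectanglesRequest = {
        let request = VNDetectRectanglesRequest { request, _ in
            guard let rectangles = request.results as? [VNRectangleObservation] else { return }
            // Before Core ML: every detected rectangle is a candidate.
            print("Detected \(rectangles.count) rectangle(s)")
        }
        request.maximumObservations = 5
        return request
    }()

    // Step 2: the Core ML layer, a VNCoreMLRequest wrapping a trained model.
    private lazy var classificationRequest: VNCoreMLRequest? = {
        guard let classifier = try? CardClassifier(configuration: MLModelConfiguration()),
              let model = try? VNCoreMLModel(for: classifier.model) else { return nil }
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            // After Core ML: keep only the rectangles the model recognizes.
            print("Classified as \(top.identifier) (confidence \(top.confidence))")
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    }()

    // AVFoundation delivers camera frames here; Vision performs the requests.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // .right suits portrait-orientation frames from the back camera.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        var requests: [VNRequest] = [rectangleRequest]
        if let classificationRequest = classificationRequest {
            requests.append(classificationRequest)
        }
        try? handler.perform(requests)
    }
}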


About Walter Tyree:

Walter first began writing software using punched cards and FORTRAN in the 70s and then started programming on an Apple ][. He was successful but relatively unhappy in his career as a high school teacher and then as a manager in corporate IT departments. He left his last real job in 2010 to develop iOS software for his own company and as a contractor for other companies. His recent projects have focused on the Core Location, Core Bluetooth, and Vision frameworks. He also really enjoys debugging code. He has a brown and black dog, a patient wife, and at least three children.

@walterpt / tyreeapps.com

[Speaker photo: walter-tyree.png]