A few years ago, ML algorithms looked strange and difficult to the average software engineer. ML is growing fast, and nowadays it is easy to improve a production system with some artificial intelligence. You don't need twenty people in a data science department to extend your service with smart analytics or AI.
I will show you how to add smart search to your service.
Currently, our service is a place where each user can share articles, documents, videos, calendar events, tasks, and so on. So we have a huge database of user content, and it has become a problem for a user to find a certain document or event. All items have tags and support full-text search. But what about video and audio files?
Flow Inspector is a tool that helps you review the TensorFlow graph in your Swift program.
The video below shows the default layout of the Flow Inspector debugger and the main interaction flow.
The Four Parts of Debugging and the Debugging Tools
There are four parts to the debugging workflow:
- File Navigator – select binary and source files.
- Source section – review your code and pick a function to inspect.
- Graph section – review your graph inside Flow Inspector.
- Console output section – review the output and errors from your program.
An alpha version of Flow Inspector is available on GitHub.
If you work with the Swift for TensorFlow project, sooner or later you will face a debugging problem. The root of the issue is that lldb can't access the TensorFlow graph in your Swift program.
The official documentation describes the compilation process:
Once the tensor operations are desugared, a transformation we call “partitioning” extracts the graph operations from the program and builds a new SIL function to represent the tensor code. In addition to removing the tensor operations from the host code, new calls are injected that call into our new runtime library to start up TensorFlow, rendezvous to collect any results, and send/receive values between the host and the tensor program as it runs. The bulk of the Graph Program Extraction transformation itself lives in TFPartition.cpp.
Once the tensor function is formed, it has some transformations applied to it, and is eventually emitted to a TensorFlow graph using the code in TFLowerGraph.cpp. After the TensorFlow graph is formed, we serialize it to a protobuf and encode the bits directly into the executable, making it easy to load at program runtime.
In other words, the final graph is serialized into protobuf bytes and copied directly into the executable file.
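To make the partitioning idea concrete, here is a minimal sketch (assuming a Swift for TensorFlow toolchain with the `TensorFlow` module available; the function name `model` is mine, for illustration). The tensor operations inside `model` are what the Graph Program Extraction pass pulls out into a TensorFlow graph, while the `print` call stays in host code and rendezvouses with the runtime for the result:

```swift
import TensorFlow

// Tensor operations in this function are candidates for Graph Program
// Extraction: the compiler partitions them out of the host code, lowers
// them to a TensorFlow graph, serializes that graph to a protobuf, and
// embeds the bytes directly in the executable.
func model(_ x: Tensor<Float>) -> Tensor<Float> {
    let w = Tensor<Float>([[1.0], [2.0]])  // graph code
    let y = matmul(x, w) + 0.5             // graph code
    return y                               // result is sent back to the host
}

let input = Tensor<Float>([[3.0, 4.0]])
print(model(input))  // host code: collects the result from the TensorFlow runtime
```

This is also why lldb sees so little: after partitioning, the graph no longer exists as ordinary Swift code in the host program.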
I made a small debug tool, Flow Inspector, which can handle that problem.
You can find the package template and readme on my GitHub page.
Today, the 1st of June, the Google Brain team committed new code to the public repository.
There are some interesting points:
1) High-level APIs will be provided as a separate SwiftPM package under github.com/tensorflow.
High-level APIs were added earlier purely to explore the programming model, not to be usable by anyone. Having high-level APIs be part of the stdlib module conveys the wrong message to beta testers, and it has been confusing ever since the open-source release.
2) Supporting Python code is one of the priorities:
- Improved Python diagnostics related to member access.
- Improved Python C API functions for binary arithmetic operations.
3) Improved cross-device sends and receives support.
4) Lots of work has been done around supporting generic @dynamicCallable methods.
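`@dynamicCallable` is the Swift attribute that lets a value be called like a function, and it is one of the building blocks behind Swift's Python interoperability. Below is a minimal sketch of a `@dynamicCallable` type; the type `Adder` and its summing behavior are hypothetical, purely for illustration:

```swift
// A minimal @dynamicCallable sketch. `Adder` is a made-up type for
// illustration; Python interop's real dynamically callable type is
// PythonObject in the Python module.
@dynamicCallable
struct Adder {
    // Handles calls with unlabeled arguments, e.g. adder(1, 2, 3).
    func dynamicallyCall(withArguments args: [Int]) -> Int {
        return args.reduce(0, +)
    }

    // Handles calls with labeled arguments, e.g. adder(x: 1, y: 2).
    func dynamicallyCall(withKeywordArguments args: KeyValuePairs<String, Int>) -> Int {
        return args.map { $0.value }.reduce(0, +)
    }
}

let adder = Adder()
print(adder(1, 2, 3))    // 6
print(adder(x: 1, y: 2)) // 3
```

The same mechanism is what allows a Swift expression like `np.zeros(3)` to forward the call to Python at runtime instead of resolving it at compile time.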