You should know that TensorFlow is written in C++ at its core (backend), with Python as the frontend language.

Python was the first client language supported by TensorFlow and currently supports the most features. More and more of that functionality is being moved into the core of TensorFlow (implemented in C++) and exposed via a C API.

If you work with TensorFlow as more than a Python software engineer, from time to time you will need the C++ environment and the available C++ code in your work. Sometimes you need to consult the C API; sometimes you need it to port available Python code to another language. Either way, you need build-ready C++ code on your computer.

How can you prepare it?

You need to build it from source. Here is a short guide:

Read More

Kraken as a high-level API for TensorFlow.

As of today, Kraken is a high-level API and "brain" system for the most powerful deep-learning framework, TensorFlow.

TensorFlow is the fastest-growing solution for neural networks. Written in C++, it delivers high performance on both CPU and GPU hardware. Kraken helps us build deep learning architectures in real time and test them in different ways and on different servers.

Using the TensorFlow library as the core of our neural network, you get a lot of benefits, such as:

  • Multi-dimensional pooling layers;
  • Multi-dimensional normalization layers;
  • Multi-dimensional convolution layers;
  • Densely-connected layers;
  • RNN and LSTM solutions;
  • Optimizers;

As of today, our tool is incredibly powerful.

Every machine learning task involves a large amount of data. Analyzing a network is a complex and confusing task. To address this, Google launched a visualization tool called TensorBoard.

Currently it is the most useful open-source tool of its kind. Unfortunately, it works only with the TensorFlow library out of the box. There is no way to feed it JSON or XML logs.

When digging into a self-written neural network, you cannot avoid data-visualization tasks. For that reason you may want to use TensorBoard from a C/C++/Java or Swift application.

I will describe how to do that further on.

Read More

Forward propagation, as well as backpropagation, involves operations on matrices. The most common one is matrix multiplication. To perform matrix multiplication in reasonable time, you need to optimise your algorithms.

There is a simple way to do it on macOS by means of Apple's Accelerate Framework. This is actually an umbrella framework for vector-optimized operations:

  • vecLib.framework – Contains vector-optimized interfaces for performing math, big-number, and DSP calculations, among others.
  • vImage.framework – Contains vector-optimized interfaces for manipulating image data.

The cblas_sgemm function can help you reach really high performance.

Actually, vecLib is just a ported version of two libraries: BLAS and LAPACK.

cblas.h and vblas.h are the interfaces to Apple’s implementations of BLAS. You can find reference documentation in the BLAS reference. Additional documentation on the BLAS standard, including reference implementations, can be found on the web starting from the BLAS FAQ page at these URLs: http://www.netlib.org/blas/faq.html and http://www.netlib.org/blas/blast-forum/blast-forum.html.

clapack.h is the interface to Apple’s implementation of LAPACK. Documentation of the LAPACK interfaces, including reference implementations, can be found on the web starting from the LAPACK FAQ page at this URL: http://netlib.org/lapack/faq.html

This is a good way to combine your code with a C++ library on Linux and macOS platforms.
Read More

While working on GPU computing, I started wondering how much GPU memory my code uses.

It turned out that it is difficult to calculate how much of the GPU memory is available and how much is used in the new macOS Sierra.

You might think that it is as simple as going to the list of devices and then to “PerformanceStatistics”, which holds the current parameters of the device.

Read More