TensorFlow in C++
With cppflow, you can run TensorFlow models in C++ without Bazel and without installing or compiling TensorFlow. Tensor manipulation, eager execution, and running saved models straight from C++ are all possible.
Knowing the C++ library in detail
The TensorFlow repository includes the TensorFlow Lite for Microcontrollers C++ library. It is intended to be legible, adaptable, thoroughly tested, simple to integrate, and backward compatible with standard TensorFlow Lite.
The basic organization of the C++ library is described in the accompanying text, along with instructions for developing your project.
Document structure:
The micro root directory's structure is rather basic, given that it is housed within the vast TensorFlow repository.
Key files:
There are tests and the most crucial files for utilizing the TensorFlow Lite for Microcontrollers interpreter at the project's root.
all_ops_resolver.h or micro_mutable_op_resolver.h
all_ops_resolver.h uses a lot of memory because it includes every available operation. In production applications, use micro_mutable_op_resolver.h to include only the operations your model requires. These resolvers supply the operations the interpreter needs to run the model.
micro_error_reporter.h
Produces debugging data.
micro_interpreter.h
Includes code for handling and running models.
The build system allows platform-specific implementations of certain files. These live in a directory named after the platform, such as sparkfun_edge.
Other directories include the following:
- kernels, which contains the operation implementations and their code.
- tools, a collection of build tools and their output.
- examples, which contains sample code.
Start a new project
We advise utilizing the Hello World example as a model for new projects.
Use the Arduino library:
If you're using Arduino, the Hello World example is included in the TensorFlowLite Arduino library, which you can download from the Arduino IDE and in Arduino Create.
Once the library has been added, go to File -> Examples. Near the bottom of the list you should see an example named TensorFlowLite:hello_world. Select it and click hello_world to load the example. You can then save a copy of the example and use it as the foundation for your own project.
Generate projects for other platforms:
With the help of a Makefile, TensorFlow Lite for Microcontrollers can generate standalone projects containing all the required source files. Keil, Make, and Mbed are the currently supported environments:

make -f tensorflow/lite/micro/tools/make/Makefile generate_projects

Because some substantial toolchains must be downloaded for the dependencies, this will take some time. When it is done, you should see that a path like gen/linux_x86_64/prj/ has been created containing a few directories (the exact path depends on your host operating system). These directories contain the generated projects and source files.
After running the command, you'll find the Hello World projects in gen/linux_x86_64/prj/hello_world. The Keil project, for instance, will be located in hello_world/keil.
Run the tests:
Use the following command to build the library and run all of its unit tests:

make -f tensorflow/lite/micro/tools/make/Makefile test
Use the following command to run a specific test, substituting <test_name> with the test's name:

make -f tensorflow/lite/micro/tools/make/Makefile <test_name>
The test names can be found in the project's Makefiles. For instance, the names of the tests in the Hello World example are specified in examples/hello_world/Makefile.inc.
Create binaries
To create a runnable binary for a given project (such as an example application), use the command below, swapping out <project_name> for the name of the project you wish to build:

make -f tensorflow/lite/micro/tools/make/Makefile <project_name>_bin
As an illustration, the following command will create a binary for the Hello World application:

make -f tensorflow/lite/micro/tools/make/Makefile hello_world_bin
By default, the project will be built for the host operating system. Use TARGET= to specify a different target architecture. The example below builds the Hello World example for the SparkFun Edge:

make -f tensorflow/lite/micro/tools/make/Makefile TARGET=sparkfun_edge hello_world_bin
When a target is specified, target-specific source files are used in place of the original code. For instance, the files constants.cc and output_handler.cc have SparkFun Edge implementations in the subdirectory examples/hello_world/sparkfun_edge, which are used when the target sparkfun_edge is specified.
The project names can be found in the project's Makefiles. For instance, the binary names for the Hello World example are specified in examples/hello_world/Makefile.inc.
Optimized kernels
The reference kernels in the root of tensorflow/lite/micro/kernels are written entirely in C/C++ and do not include hardware optimizations tailored to particular platforms.
Optimized kernels live in subdirectories. For instance, the kernels/cmsis-nn directory contains several optimized kernels based on CMSIS-NN.
To generate projects using optimized kernels, use the following command, replacing <subdirectory_name> with the name of the subdirectory containing the optimizations:

make -f tensorflow/lite/micro/tools/make/Makefile TAGS=<subdirectory_name> generate_projects
You can add your own optimizations by creating a new subdirectory for them. Pull requests with new optimized implementations are welcome.
Make an Arduino library:
The library manager in the Arduino IDE provides access to a nightly build of the Arduino library.
You can execute the following script from the TensorFlow repository to create a fresh build of the library if necessary:
./tensorflow/lite/micro/tools/ci_build/test_arduino.sh
tensorflow_lite.zip, located in the directory gen/arduino_x86_64/prj, contains the finished library.
Creating a TensorFlow graph in C++
In general, when developing TensorFlow code, it is best to decouple the construction of the graph from its execution. This is because the graph is normally created once and then run several times with varied inputs. Even if you do not modify the graph's variables (such as weights), there is no need to reconstruct the graph each time you wish to run it, unless the graph is so basic that the separation effort is not worthwhile.
Complete the data preparation
Let's take a CNN graph as an example. We will look at data preparation for the CNN and the other pieces it needs, and through this we will learn how to write TensorFlow code in C++.
CreateGraphForImage is a method for making a graph.
It accepts a Boolean value indicating whether or not to unstack the image. Use false when you want to load only one image, and true when you want to load a batch. The reason is that stacking a batch adds an extra dimension; if you wish to run the CNN with only one image, you must still supply all four dimensions.
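The shape bookkeeping described above can be illustrated with a small standalone sketch; ToBatchShape is a hypothetical helper for illustration, not part of the original code:

```cpp
#include <vector>

// Illustrative only: a decoded image is 3-D (height, width, channels);
// stacking a batch of N images prepends a batch dimension, producing the
// 4-D shape {N, H, W, C} that the CNN expects.
std::vector<int> ToBatchShape(const std::vector<int>& image_shape, int batch) {
  std::vector<int> out;
  out.push_back(batch);                                           // N
  out.insert(out.end(), image_shape.begin(), image_shape.end());  // H, W, C
  return out;
}
```

Even a single image must end up with a batch dimension of 1 so that all four dimensions are present.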
Note: when switching from unstack to stack, the graph must be recreated.
ReadTensorFromImageFile executes the graph created by the preceding method. You pass in a file's full path name, and it returns a 3- or 4-dimensional Tensor.
The code for both these methods is almost identical.
Handling folders and paths:
ReadFileTensors is concerned with folders and files. It takes a base folder string and a vector of [sub-folder, label value] pairs.
If you downloaded the photographs from Kaggle, you might have noticed two sub-folders under the train, for example, cats and dogs. Each one should be labeled with a number, and these pairings should be supplied as input.
The result is a vector of pairs, each consisting of a Tensor (of an image) and a label.
Here's one way to put it:
string base_folder = "/Users/bennyfriedman/Code/TF2example/TF2example/data/cats_and_dogs_small/train";
vector<pair<Tensor, float>> all_files_tensors;
model.ReadFileTensors(base_folder, {make_pair("cats", 0), make_pair("dogs", 1)}, all_files_tensors);
We must open each directory and read its files. To join two path strings, use
io::JoinPath (include tensorflow/core/lib/io/path.h)
string folder_name = io::JoinPath(base_folder_name, "cats");
To interact with the file system, use "Env". This utility class (not documented) gives you facilities similar to C++17 std::filesystem.
Env* penv = Env::Default();
TF_RETURN_IF_ERROR(penv->IsDirectory(folder_name));
vector<string> file_names;
TF_RETURN_IF_ERROR(penv->GetChildren(folder_name, &file_names));
for(string file: file_names)
{
…
}
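For a self-contained illustration, the same traversal can be sketched with C++17 std::filesystem, which the text compares Env to; this GetChildren is a standalone stand-in, not TensorFlow's Env::GetChildren:

```cpp
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Standalone sketch: return the names of the entries directly under
// folder_name, similar in spirit to Env::GetChildren.
std::vector<std::string> GetChildren(const std::string& folder_name) {
  std::vector<std::string> file_names;
  for (const fs::directory_entry& entry : fs::directory_iterator(folder_name)) {
    file_names.push_back(entry.path().filename().string());
  }
  return file_names;
}
```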
ReadFileTensors shuffles the images in the vector, allowing us to feed different images while training.
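The shuffling step can be sketched in isolation; here plain (string, float) pairs stand in for the (Tensor, label) pairs, and ShufflePairs is an illustrative name, not from the original code:

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <utility>
#include <vector>

// Randomize the order of (file, label) pairs so that consecutive training
// batches mix examples from both classes.
void ShufflePairs(std::vector<std::pair<std::string, float>>& files) {
  std::mt19937 rng(std::random_device{}());
  std::shuffle(files.begin(), files.end(), rng);
}
```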
Creating the batches
ReadBatches captures all of the logic's primary requirements. You specify the base folder, the vectors of two subfolders and labels, and the batch size. In response, you will receive two Tensor vectors, one for pictures and one for labels. Each Tensor is a collection of pictures or labels according to your specified size.
It begins by reading the folder's content and subfolders into a vector. It then does some computation to determine how to divide the batches. The pairs of Tensors and labels are then split into two Input vectors, with each element in the Tensor vector matching the corresponding element in the labels vector.
vector<Input> one_batch_image;
one_batch_image.push_back(Input(tensor));
//Add more tensors
InputList one_batch_inputs(one_batch_image);
Scope root = Scope::NewRootScope();
auto stacked_images = Stack(root, one_batch_inputs);
ClientSession session(root);
vector<Tensor> out_tensors;
TF_CHECK_OK(session.Run({}, {stacked_images}, &out_tensors));
//use out_tensors[0]
As written above, the tensors go in as 3-dimensional tensors, and the batches created are 4-dimensional.
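The batch-splitting arithmetic that ReadBatches performs can be sketched independently of TensorFlow; SplitIntoBatches is an illustrative helper, and dropping the leftover tail is one reasonable policy that the original code may handle differently:

```cpp
#include <cstddef>
#include <vector>

// Divide items into consecutive batches of batch_size, dropping any
// leftover tail that cannot fill a whole batch.
template <typename T>
std::vector<std::vector<T>> SplitIntoBatches(const std::vector<T>& items,
                                             std::size_t batch_size) {
  std::vector<std::vector<T>> batches;
  const std::size_t num_batches = items.size() / batch_size;
  for (std::size_t b = 0; b < num_batches; ++b) {
    batches.emplace_back(items.begin() + b * batch_size,
                         items.begin() + (b + 1) * batch_size);
  }
  return batches;
}
```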