Commit 5195679c authored by tqchen, committed by Tianqi Chen

[DOCS] Improve docs naming, fix docs warnings

parent bb2b8620
Showing with 209 additions and 85 deletions
......@@ -3,9 +3,9 @@ This folder contains various extension projects using TVM,
they also serve as examples of how to use TVM in your own project.
If you are interested in writing optimized kernels with TVM, check out [TOPI: TVM Operator Inventory](../topi).
If you are interested in end-to-end deep learning model compilation, check out [NNVM Compiler](https://github.com/dmlc/nnvm).
- [extension](extension) How to extend TVM C++ api along with python API.
- [ios_rpc](ios_rpc) iOS RPC server.
- [android_rpc](android_rpc) Android RPC server.
- [benchmark](benchmark) Example end-to-end compilation benchmarks
- [howto_deploy](howto_deploy) Tutorial on how to deploy TVM with minimum code dependency.
......@@ -20,3 +20,4 @@ Python API
contrib
dev
topi
nnvm/index
File moved
Python API
==========
NNVM API
========
This document contains the Python API of the NNVM compiler toolchain.
For user
.. toctree::
:maxdepth: 2
......
File moved
File moved
TVM Operator Inventory
----------------------
TOPI
----
.. automodule:: topi
Index
~~~~~
**List of operators**
List of operators
~~~~~~~~~~~~~~~~~
.. autosummary::
......@@ -52,8 +50,8 @@ Index
topi.broadcast_minimum
**List of schedules**
List of schedules
~~~~~~~~~~~~~~~~~
.. autosummary::
topi.generic.schedule_conv2d_nchw
......
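To give a concrete flavor of how the operators listed above are used, here is a minimal sketch; the shapes and the `llvm` target are arbitrary choices for illustration:

```python
import tvm
import topi

# Declare two input tensors and combine them with a TOPI operator.
A = tvm.placeholder((32, 32), name="A")
B = tvm.placeholder((32, 32), name="B")
C = topi.broadcast_add(A, B)

# Create a default schedule and compile for CPU.
s = tvm.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target="llvm")
```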
Links to API References
=======================
Links to C++/JS API References
==============================
This page contains links to API references that are built with different documentation build systems.
......
TVM Design and Developer Guide
==============================
Building an IR stack for deep learning systems involves many
many systems-level design decisions.
Building a compiler stack for deep learning systems involves many, many systems-level design decisions.
In this part of documentation, we share the rationale for the specific choices made when designing TVM.
.. toctree::
:maxdepth: 2
runtime
nnvm_json_spec
nnvm_overview
File moved
File moved
How to Deploy TVM Modules
=========================
How to Deploy Compiled Modules
==============================
We provide an example of how to deploy TVM modules in [apps/howto_deploy](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy).
To run the example, you can use the following command:
......@@ -59,3 +59,124 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
Refer to [this guide](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build a CPU/OpenCL flavor of the TVM runtime for the Android target.
For how to load and execute the model from the Android Java TVM API, see this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source.
Deploy NNVM Modules
-------------------
NNVM-compiled modules are fully embedded in the TVM runtime as long as the ```GRAPH_RUNTIME``` option
is enabled when building the TVM runtime. Check out the [TVM documentation](http://docs.tvmlang.org/) for
how to deploy the TVM runtime to your system.
In a nutshell, we need three items to deploy a compiled module.
Check out our tutorials on getting started with the NNVM compiler for more details.
- The graph JSON data, which contains the execution graph.
- The tvm module library of compiled functions.
- The parameter blob, which contains the stored parameters.
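For reference, these three artifacts fall out of the NNVM compilation step. Below is a minimal sketch, assuming a toy one-layer model; in practice the network would come from an NNVM frontend:

```python
import nnvm.symbol as sym
import nnvm.compiler

# A toy one-layer model standing in for a real network.
x = sym.Variable("x")
net = sym.dense(data=x, units=10)

graph, lib, params = nnvm.compiler.build(
    net, target="llvm", shape={"x": (1, 784)})

lib.export_library("deploy.so")                      # module library
with open("deploy.json", "w") as fo:
    fo.write(graph.json())                           # execution graph
with open("deploy.params", "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))  # parameter blob
```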
We can then use TVM's runtime API to deploy the compiled module.
Here is an example in Python:
```python
import numpy as np
import tvm

# tvm module for compiled functions.
loaded_lib = tvm.module.load("deploy.so")
# json graph
loaded_json = open("deploy.json").read()
# parameters in binary
loaded_params = bytearray(open("deploy.params", "rb").read())
fcreate = tvm.get_global_func("tvm.graph_runtime.create")
ctx = tvm.gpu(0)
gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
set_input, get_output, run = gmodule["set_input"], gmodule["get_output"], gmodule["run"]
# example input; the shape must match the model's expected input shape
shape = (1, 3, 224, 224)
x_np = np.random.uniform(size=shape).astype("float32")
set_input("x", tvm.nd.array(x_np))
gmodule["load_params"](loaded_params)
run()
# output buffer; its shape must match the model's output shape
out = tvm.nd.empty(shape)
get_output(0, out)
print(out.asnumpy())
```
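As an aside, when the TVM Python package is available on the target, the same steps can be written more compactly via the `tvm.contrib.graph_runtime` wrapper, which calls the same `tvm.graph_runtime.create` global function under the hood. A sketch reusing the variables from the example above:

```python
from tvm.contrib import graph_runtime

gmodule = graph_runtime.create(loaded_json, loaded_lib, ctx)
gmodule.set_input("x", tvm.nd.array(x_np))
gmodule.load_params(loaded_params)
gmodule.run()
out = gmodule.get_output(0, tvm.nd.empty(shape))
print(out.asnumpy())
```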
And here is an example in C++:
```cpp
#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/packed_func.h>
#include <fstream>
#include <iterator>
#include <algorithm>
#include <iostream>
int main()
{
// tvm module for compiled functions
tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("deploy.so");
// json graph
std::ifstream json_in("deploy.json", std::ios::in);
std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
json_in.close();
// parameters in binary
std::ifstream params_in("deploy.params", std::ios::binary);
std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
params_in.close();
// parameters need to be TVMByteArray type to indicate the binary data
TVMByteArray params_arr;
params_arr.data = params_data.c_str();
params_arr.size = params_data.length();
int dtype_code = kDLFloat;
int dtype_bits = 32;
int dtype_lanes = 1;
int device_type = kDLCPU;
int device_id = 0;
// get global function module for graph runtime
tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);
DLTensor* x;
int in_ndim = 4;
int64_t in_shape[4] = {1, 3, 224, 224};
TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
// load image data saved in binary
std::ifstream data_fin("cat.bin", std::ios::binary);
data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);
// get the function from the module (set input data)
tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
set_input("data", x);
// get the function from the module (load parameters)
tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
load_params(params_arr);
// get the function from the module (run it)
tvm::runtime::PackedFunc run = mod.GetFunction("run");
run();
DLTensor* y;
int out_ndim = 1;
int64_t out_shape[1] = {1000, };
TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);
// get the function from the module (get output data)
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
get_output(0, y);
// get the maximum position in output vector
auto y_iter = static_cast<float*>(y->data);
auto max_iter = std::max_element(y_iter, y_iter + 1000);
auto max_index = std::distance(y_iter, max_iter);
std::cout << "The maximum position in output vector is: " << max_index << std::endl;
TVMArrayFree(x);
TVMArrayFree(y);
return 0;
}
```
......@@ -22,37 +22,20 @@ git submodule update
## Build the Shared Library
Our goal is to build the shared library:
- On Linux/OSX the target library is `libtvm.so`
- On Windows the target library is `libtvm.dll`
The minimal building requirement is:
- A recent C++ compiler supporting C++11 (g++ 4.8 or higher)
You can edit `make/config.mk` to change the compile options, and then build by
`make`. If everything goes well, we can go to the specific language installation section.
### Building on Windows
TVM supports building via MSVC using CMake. The minimum required VS version is **Visual Studio Community 2015 Update 3**. To generate the VS solution file using CMake,
make sure you have a recent version of CMake added to your path, and then run the following from the tvm directory:
```bash
mkdir build
cd build
cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
```
This will generate the VS project using the MSVC 14 64 bit generator. Open the .sln file in the build directory and build with Visual Studio.
### Customized Building
Install prerequisites first:
Our goal is to build the shared libraries:
- On Linux/OSX the target libraries are `libtvm.so` and `libtvm_topi.so`
- On Windows the target libraries are `libtvm.dll` and `libtvm_topi.dll`
```bash
sudo apt-get update
sudo apt-get install -y python python-dev python-setuptools gcc libtinfo-dev zlib1g-dev
```
The minimal building requirement is:
- A recent C++ compiler supporting C++11 (g++ 4.8 or higher)
- We highly recommend building with LLVM to enable all the features.
- It is possible to build without the LLVM dependency if you only want to use CUDA/OpenCL.
The configuration of tvm can be modified via ```config.mk```:
- First copy ```make/config.mk``` to the project root, where any local modifications will be ignored by git, then modify the relevant flags.
......@@ -62,8 +45,36 @@ The configuration of tvm can be modified by ```config.mk```
[LLVM Download Page](http://releases.llvm.org/download.html).
- Unzip to a certain location, modify ```config.mk``` to add ```LLVM_CONFIG=/path/to/your/llvm/bin/llvm-config```
- You can also use [LLVM Nightly Ubuntu Build](https://apt.llvm.org/)
- By default, the CUDA and OpenCL code generators do not require LLVM.
- Note that the apt packages append the version number to ```llvm-config```.
For example, set ```LLVM_CONFIG=llvm-config-4.0``` if you installed the 4.0 package.
We can then build tvm by running `make`.
After tvm is built, we can proceed to build nnvm using the following commands:
```bash
cd nnvm
make -j4
```
This creates `libnnvm_compiler.so` under the `nnvm/lib` folder.
If everything goes well, we can go to the specific language installation section.
### Building on Windows
TVM supports building via MSVC using CMake. The minimum required VS version is **Visual Studio Community 2015 Update 3**.
To generate the VS solution file using CMake,
make sure you have a recent version of CMake added to your path, and then run the following from the tvm directory:
```bash
mkdir build
cd build
cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
```
This will generate the VS project using the MSVC 14 64-bit generator.
Open the .sln file in the build directory and build it with Visual Studio.
To build with LLVM on Windows, you will need to build LLVM from source.
You also need to build nnvm by running the same steps under the nnvm folder.
## Python Package Installation
......@@ -77,7 +88,7 @@ There are several ways to install the package:
The changes will be reflected immediately once you pull the code and rebuild the project (no need to call ```setup``` again).
```bash
export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:${PYTHONPATH}
export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:/path/to/tvm/nnvm/python:${PYTHONPATH}
```
2. Install tvm python bindings by `setup.py`:
......@@ -89,4 +100,5 @@ There are several ways to install the package:
# providing --user flag may trigger error during installation in such case.
cd python; python setup.py install --user; cd ..
cd topi/python; python setup.py install --user; cd ../..
cd nnvm/python; python setup.py install --user; cd ../..
```
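Either way, a quick sanity check that all three packages are importable (run from outside the source tree, so the installed copies are used):

```python
# If any of these fail, revisit the PYTHONPATH or setup.py step above.
import tvm
import topi
import nnvm
print(tvm.__version__)
```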
### NNPACK for Multi-Core CPU Support in TVM
[NNPACK](https://github.com/Maratyszcza/NNPACK) is an acceleration package
for neural network computations, which can run on x86-64, ARMv7, or ARM64 architecture CPUs.
Using NNPACK, higher-level libraries like _MXNet_ can speed up
the execution on multi-core CPU computers, including laptops and mobile devices.
***Note***: As TVM already has natively tuned schedules, NNPACK is here mainly for reference and comparison purposes.
For regular use, prefer the natively tuned TVM implementation.
_TVM_ supports NNPACK for forward propagation (inference only) in convolution, max-pooling, and fully-connected layers.
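Once TVM is rebuilt, one way to check whether NNPACK support was actually compiled in is to look up one of the contrib functions in the global registry. This is a sketch; the registered name below is an assumption based on the `tvm.contrib.nnpack` module:

```python
import tvm

# allow_missing=True returns None instead of raising if the function is absent.
fc = tvm.get_global_func(
    "tvm.contrib.nnpack.fully_connected_inference", allow_missing=True)
print("NNPACK support:", "enabled" if fc else "not built in")
```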
......@@ -29,7 +30,7 @@ The following table explains under which conditions NNPACK will work.
### Build/Install LLVM
LLVM is required for the CPU code generator.
Since LLVM takes a long time to build from source, you can download a pre-built version of LLVM from the [LLVM Download Page](http://releases.llvm.org/download.html).
For LLVM 4.0 you can follow these steps:
```bash
# Add llvm repository in apt source list
......@@ -63,7 +64,7 @@ apt-get install -y \
If the trained model meets some conditions for using NNPACK,
you can build TVM with NNPACK support.
Follow these simple steps:
* Build NNPACK shared library with the following commands. _TVM_ will link NNPACK dynamically.
Note: The following NNPACK installation instructions have been tested on Ubuntu 16.04.
......@@ -77,7 +78,7 @@ cd ninja
./configure.py --bootstrap
```
Set the environment variable PATH to tell bash where to find the ninja executable. For example, assuming we cloned ninja into the home directory ~, we can add the following line to ~/.bashrc:
```bash
export PATH="${PATH}:~/ninja"
```
......@@ -118,27 +119,4 @@ after configuration use `make` to build TVM
```bash
make
make install
```
#### Python Package Installation
The Python package for [tvm](https://github.com/dmlc/tvm) depends on [topi](https://github.com/dmlc/tvm/tree/master/topi).
The tvm Python package is located in the `tvm/python` folder and the topi Python package in the `tvm/topi/python` folder.
There are several ways to install the packages; in all cases, both the TVM and TOPI libraries must be present in the Python environment:
1. Set the environment variable PYTHONPATH to tell Python where to find the libraries. For example, assuming we cloned tvm into the home directory ~, we can add the following line to ~/.bashrc. This is recommended for developers who may change the code, as the changes are reflected immediately once you pull the code and rebuild the project (no need to call setup again):
```bash
export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:${PYTHONPATH}
```
2. Install tvm and topi Python bindings via `setup.py`:
```bash
# install tvm package for the current user
cd topi/python
python setup.py install --user;
cd ../../python
python setup.py install --user;
```
TVM Documentation
=================
Welcome to TVM documentation.
Contents
--------
Get Started
-----------
.. toctree::
:maxdepth: 1
self
how_to/install
tutorials/index
faq
how_to/deploy
how_to/integrate
how_to/contribute
faq
API Reference
-------------
.. toctree::
:maxdepth: 2
api/python/index
dev/index
api_links
Developer Guide
---------------
.. toctree::
:maxdepth: 2
dev/index
nnvm_top
Index
-----
.. toctree::
:maxdepth: 1
genindex
Core Tensor Operators
=====================
NNVM Core Tensor Operators
==========================
This page contains the list of core tensor operator primitives pre-defined in NNVM.
The core tensor operator primitives (``nnvm.top``) cover typical workloads in deep learning.
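For instance, a small graph can be composed directly from these primitives; below is a minimal sketch using `nnvm.symbol`, which exposes the `nnvm.top` operators:

```python
import nnvm.symbol as sym

# Compose core tensor operators into a small graph.
x = sym.Variable("x")
y = sym.dense(data=x, units=128, name="fc1")
z = sym.relu(y)
print(z.list_input_names())  # e.g. ['x', 'fc1_weight', 'fc1_bias']
```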
......