diff --git a/apps/README.md b/apps/README.md
index 254f8c26a51036d13d432cb6e63cf55be89096d1..2345cc3ab548b80fbf4e4a144f1b11ba801deb0b 100644
--- a/apps/README.md
+++ b/apps/README.md
@@ -3,9 +3,9 @@ This folder contains various extension projects using TVM,
 they also serve as examples on how to use TVM in your own project.
 
 If you are interested in writing optimized kernels with TVM, checkout [TOPI: TVM Operator Inventory](../topi).
-If you are interested in end to end deep learning model compilation, checkout  [NNVM Compiler](https://github.com/dmlc/nnvm).
 
 - [extension](extension) How to extend TVM C++ api along with python API.
 - [ios_rpc](ios_rpc) iOS RPC server.
 - [android_rpc](android_rpc) Android RPC server.
+- [benchmark](benchmark) Example end-to-end compilation benchmarks.
 - [howto_deploy](howto_deploy) Tutorial on how to deploy TVM with minimum code dependency.
diff --git a/nnvm/examples/benchmark/gpu_imagenet_bench.py b/apps/benchmark/gpu_imagenet_bench.py
similarity index 100%
rename from nnvm/examples/benchmark/gpu_imagenet_bench.py
rename to apps/benchmark/gpu_imagenet_bench.py
diff --git a/nnvm/examples/benchmark/rasp_imagenet_bench.py b/apps/benchmark/rasp_imagenet_bench.py
similarity index 100%
rename from nnvm/examples/benchmark/rasp_imagenet_bench.py
rename to apps/benchmark/rasp_imagenet_bench.py
diff --git a/docs/api/python/index.rst b/docs/api/python/index.rst
index 7b9afaca32f516d3e1ac085afc72f1886a0daf09..a6bed557dd3b0ec4752fa3fca41a96f79a7bac3d 100644
--- a/docs/api/python/index.rst
+++ b/docs/api/python/index.rst
@@ -20,3 +20,4 @@ Python API
    contrib
    dev
    topi
+   nnvm/index
diff --git a/nnvm/docs/api/python/compiler.rst b/docs/api/python/nnvm/compiler.rst
similarity index 100%
rename from nnvm/docs/api/python/compiler.rst
rename to docs/api/python/nnvm/compiler.rst
diff --git a/nnvm/docs/api/python/frontend.rst b/docs/api/python/nnvm/frontend.rst
similarity index 100%
rename from nnvm/docs/api/python/frontend.rst
rename to docs/api/python/nnvm/frontend.rst
diff --git a/nnvm/docs/api/python/graph.rst b/docs/api/python/nnvm/graph.rst
similarity index 100%
rename from nnvm/docs/api/python/graph.rst
rename to docs/api/python/nnvm/graph.rst
diff --git a/nnvm/docs/api/python/index.rst b/docs/api/python/nnvm/index.rst
similarity index 82%
rename from nnvm/docs/api/python/index.rst
rename to docs/api/python/nnvm/index.rst
index 2c574ac60994f5275bff04bf72c57dd726b02261..c0e5912c76bef998611c5cfd1c4ce2c091513bf0 100644
--- a/nnvm/docs/api/python/index.rst
+++ b/docs/api/python/nnvm/index.rst
@@ -1,9 +1,7 @@
-Python API
-==========
+NNVM API
+========
 
 This document contains the python API to NNVM compiler toolchain.
-For user
-
 
 .. toctree::
    :maxdepth: 2
diff --git a/nnvm/docs/api/python/symbol.rst b/docs/api/python/nnvm/symbol.rst
similarity index 100%
rename from nnvm/docs/api/python/symbol.rst
rename to docs/api/python/nnvm/symbol.rst
diff --git a/nnvm/docs/api/python/top.rst b/docs/api/python/nnvm/top.rst
similarity index 100%
rename from nnvm/docs/api/python/top.rst
rename to docs/api/python/nnvm/top.rst
diff --git a/docs/api/python/topi.rst b/docs/api/python/topi.rst
index ffdb076a226010be9e0ac87d74f5736a20da84af..823b941dbc7841bac99872b29176f34f75ebde02 100644
--- a/docs/api/python/topi.rst
+++ b/docs/api/python/topi.rst
@@ -1,11 +1,9 @@
-TVM Operator Inventory
-----------------------
+TOPI
+----
 .. automodule:: topi
 
-Index
-~~~~~
-
-**List of operators**
+List of operators
+~~~~~~~~~~~~~~~~~
 
 .. autosummary::
 
@@ -52,8 +50,8 @@ Index
    topi.broadcast_minimum
 
 
-**List of schedules**
-
+List of schedules
+~~~~~~~~~~~~~~~~~
 .. autosummary::
 
    topi.generic.schedule_conv2d_nchw
diff --git a/docs/api_links.rst b/docs/api_links.rst
index 9a55af1728b9028cd8463f186f13ff66077e7c01..d749a2440ac7cd7976bfbbb00ec7b7e59988dc8b 100644
--- a/docs/api_links.rst
+++ b/docs/api_links.rst
@@ -1,5 +1,5 @@
-Links to API References
-=======================
+Links to C++/JS API References
+==============================
 
 This page contains links to API references that are build with different doc build system.
 
diff --git a/docs/dev/index.rst b/docs/dev/index.rst
index 0d0ee852f6f83a735731402672f2aa789eafa93d..5056630fa4dbb7e5ac529981939a53bc99bd2ac0 100644
--- a/docs/dev/index.rst
+++ b/docs/dev/index.rst
@@ -1,11 +1,12 @@
 TVM Design and Developer Guide
 ==============================
 
-Building an IR stack for deep learning systems involves many
-many systems-level design decisions.
+Building a compiler stack for deep learning systems involves many systems-level design decisions.
 In this part of documentation, we share the rationale for the specific choices made when designing TVM.
 
 .. toctree::
    :maxdepth: 2
 
    runtime
+   nnvm_json_spec
+   nnvm_overview
diff --git a/nnvm/docs/json_spec.rst b/docs/dev/nnvm_json_spec.rst
similarity index 100%
rename from nnvm/docs/json_spec.rst
rename to docs/dev/nnvm_json_spec.rst
diff --git a/nnvm/docs/dev/overview.md b/docs/dev/nnvm_overview.md
similarity index 100%
rename from nnvm/docs/dev/overview.md
rename to docs/dev/nnvm_overview.md
diff --git a/docs/how_to/deploy.md b/docs/how_to/deploy.md
index 841b9d54489debcfc1f50f5432fc19a9635fc2dc..68630adfaeaaefcb925219b5ff72b948961c9ca4 100644
--- a/docs/how_to/deploy.md
+++ b/docs/how_to/deploy.md
@@ -1,5 +1,5 @@
-How to Deploy TVM Modules
-=========================
+How to Deploy Compiled Modules
+==============================
 We provide an example on how to deploy TVM modules in [apps/howto_deploy](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy)
 
 To run the example, you can use the following command
@@ -59,3 +59,124 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
 
 Refer [here](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build CPU/OpenCL version flavor TVM runtime for android target.
 From android java TVM API to load model & execute can be refered at this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source.
+
+
+Deploy NNVM Modules
+-------------------
+NNVM compiled modules are fully embedded in the TVM runtime as long as the ```GRAPH_RUNTIME``` option
+is enabled when building the TVM runtime. Check out the [TVM documentation](http://docs.tvmlang.org/) for
+how to deploy the TVM runtime to your system.
+
+In a nutshell, we need three items to deploy a compiled module.
+Check out our tutorials on getting started with the NNVM compiler for more details.
+
+- The graph json data which contains the execution graph.
+- The tvm module library of compiled functions.
+- The parameter blob for the stored parameters.
+
+We can then use TVM's runtime API to deploy the compiled module.
+Here is an example in python.
+
+```python
+import numpy as np
+import tvm
+
+# tvm module for compiled functions.
+loaded_lib = tvm.module.load("deploy.so")
+# json graph
+loaded_json = open("deploy.json").read()
+# parameters in binary
+loaded_params = bytearray(open("deploy.params", "rb").read())
+
+fcreate = tvm.get_global_func("tvm.graph_runtime.create")
+ctx = tvm.gpu(0)
+gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
+set_input, get_output, run = gmodule["set_input"], gmodule["get_output"], gmodule["run"]
+# feed the input; here a random array with the model's input shape as an example
+x_np = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
+set_input("x", tvm.nd.array(x_np, ctx))
+gmodule["load_params"](loaded_params)
+run()
+out = tvm.nd.empty((1000,), ctx=ctx)
+get_output(0, out)
+print(out.asnumpy())
+```
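+
+The graph json mentioned above is plain JSON, so it can be inspected with standard tools. Below is a minimal sketch; the graph content is hypothetical, hand-written to follow the structure described in the NNVM JSON specification (`nodes`, `arg_nodes`, `heads`):

```python
import json

# A tiny hand-written graph in the NNVM json format (hypothetical content,
# following the structure described in the NNVM JSON spec).
graph_json = """
{
  "nodes": [
    {"op": "null", "name": "x", "inputs": []},
    {"op": "tvm_op", "name": "relu0", "inputs": [[0, 0, 0]]}
  ],
  "arg_nodes": [0],
  "heads": [[1, 0, 0]]
}
"""

graph = json.loads(graph_json)
# list the node names in topological order
print([node["name"] for node in graph["nodes"]])  # -> ['x', 'relu0']
```

Inspecting the graph this way can help confirm which input names (`"x"` above) a deployed module expects before calling `set_input`.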
+
+An example in C++:
+```cpp
+#include <dlpack/dlpack.h>
+#include <tvm/runtime/module.h>
+#include <tvm/runtime/registry.h>
+#include <tvm/runtime/packed_func.h>
+
+#include <fstream>
+#include <iterator>
+#include <algorithm>
+#include <iostream>
+
+int main()
+{
+    // tvm module for compiled functions
+    tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("deploy.so");
+
+    // json graph
+    std::ifstream json_in("deploy.json", std::ios::in);
+    std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
+    json_in.close();
+
+    // parameters in binary
+    std::ifstream params_in("deploy.params", std::ios::binary);
+    std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
+    params_in.close();
+
+    // parameters need to be TVMByteArray type to indicate the binary data
+    TVMByteArray params_arr;
+    params_arr.data = params_data.c_str();
+    params_arr.size = params_data.length();
+
+    int dtype_code = kDLFloat;
+    int dtype_bits = 32;
+    int dtype_lanes = 1;
+    int device_type = kDLCPU;
+    int device_id = 0;
+
+    // get global function module for graph runtime
+    tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);
+
+    DLTensor* x;
+    int in_ndim = 4;
+    int64_t in_shape[4] = {1, 3, 224, 224};
+    TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
+    // load image data saved in binary
+    std::ifstream data_fin("cat.bin", std::ios::binary);
+    data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);
+
+    // get the function from the module(set input data)
+    tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
+    set_input("data", x);
+
+    // get the function from the module (load parameters)
+    tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
+    load_params(params_arr);
+
+    // get the function from the module(run it)
+    tvm::runtime::PackedFunc run = mod.GetFunction("run");
+    run();
+
+    DLTensor* y;
+    int out_ndim = 1;
+    int64_t out_shape[1] = {1000, };
+    TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);
+
+    // get the function from the module(get output data)
+    tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
+    get_output(0, y);
+
+    // get the maximum position in output vector
+    auto y_iter = static_cast<float*>(y->data);
+    auto max_iter = std::max_element(y_iter, y_iter + 1000);
+    auto max_index = std::distance(y_iter, max_iter);
+    std::cout << "The maximum position in output vector is: " << max_index << std::endl;
+
+    TVMArrayFree(x);
+    TVMArrayFree(y);
+
+    return 0;
+}
+```
diff --git a/docs/how_to/install.md b/docs/how_to/install.md
index 6e81c04075504b67252ad94c03407f3e1f96416f..c6210e802cab75316011263352e0cca7cc9983b4 100644
--- a/docs/how_to/install.md
+++ b/docs/how_to/install.md
@@ -22,37 +22,20 @@ git submodule update
 
 ## Build the Shared Library
 
-Our goal is to build the shared library:
-- On Linux/OSX the target library is `libtvm.so`
-- On Windows the target library is `libtvm.dll`
-
-The minimal building requirement is
-- A recent c++ compiler supporting C++ 11 (g++-4.8 or higher)
-
-You can edit `make/config.mk` to change the compile options, and then build by
-`make`. If everything goes well, we can go to the specific language installation section.
-
-### Building on Windows
-
-TVM support build via MSVC using cmake. The minimum required VS version is **Visual Studio Community 2015 Update 3**. In order to generate the VS solution file using cmake,
-make sure you have a recent version of cmake added to your path and then from the tvm directory:
-
-```bash
-mkdir build
-cd build
-cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
-```
-This will generate the VS project using the MSVC 14 64 bit generator. Open the .sln file in the build directory and build with Visual Studio.
-
-### Customized Building
-
-Install prerequisites first:
+Our goal is to build the shared libraries:
+- On Linux/OSX the target libraries are `libtvm.so, libtvm_topi.so`
+- On Windows the target libraries are `libtvm.dll, libtvm_topi.dll`
+
+Install the build prerequisites first (the commands below are for Ubuntu):
 
 ```bash
 sudo apt-get update
 sudo apt-get install -y python python-dev python-setuptools gcc libtinfo-dev zlib1g-dev
 ```
 
+The minimal building requirement is
+- A recent C++ compiler supporting C++11 (g++-4.8 or higher)
+- We highly recommend building with LLVM to enable all the features.
+- It is possible to build without the LLVM dependency if you only want to use CUDA/OpenCL.
+
 The configuration of tvm can be modified by ```config.mk```
 - First copy ```make/config.mk``` to the project root, on which
   any local modification will be ignored by git, then modify the according flags.
@@ -62,8 +45,36 @@ The configuration of tvm can be modified by ```config.mk```
     [LLVM Download Page](http://releases.llvm.org/download.html).
     - Unzip to a certain location, modify ```config.mk``` to add ```LLVM_CONFIG=/path/to/your/llvm/bin/llvm-config```
   - You can also use [LLVM Nightly Ubuntu Build](https://apt.llvm.org/)
-    - Note that apt-package append ```llvm-config``` with version number. For example, set ```LLVM_CONFIG=llvm-config-4.0``` if you installed 4.0 package
-  - By default CUDA and OpenCL code generator do not require llvm.
+    - Note that apt-package append ```llvm-config``` with version number.
+      For example, set ```LLVM_CONFIG=llvm-config-4.0``` if you installed 4.0 package
+
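As a concrete sketch of the `config.mk` step above (after copying `make/config.mk` to the project root; `llvm-config-4.0` is an example binary name from the apt packages, adjust to your installed version):

```shell
# Append the LLVM setting to config.mk in the project root.
# llvm-config-4.0 is a hypothetical example; use the name your
# LLVM installation provides.
echo 'LLVM_CONFIG = llvm-config-4.0' >> config.mk
grep LLVM_CONFIG config.mk
```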
+We can then build tvm by `make`.
+After tvm is built, we can proceed to build nnvm with the following commands.
+
+```bash
+cd nnvm
+make -j4
+```
+
+This creates `libnnvm_compiler.so` under the `nnvm/lib` folder.
+If everything goes well, we can go to the specific language installation section.
+
+
+### Building on Windows
+
+TVM supports building via MSVC using cmake. The minimum required VS version is **Visual Studio Community 2015 Update 3**.
+In order to generate the VS solution file using cmake,
+make sure you have a recent version of cmake added to your path and then from the tvm directory:
+
+```bash
+mkdir build
+cd build
+cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
+```
+This will generate the VS project using the MSVC 14 64 bit generator.
+Open the .sln file in the build directory and build with Visual Studio.
+In order to build with LLVM on Windows, you will need to build LLVM from source.
+You also need to build nnvm by running the same commands under the nnvm folder.
 
 ## Python Package Installation
 
@@ -77,7 +88,7 @@ There are several ways to install the package:
     The changes will be immediately reflected once you pulled the code and rebuild the project (no need to call ```setup``` again)
 
     ```bash
-    export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:${PYTHONPATH}
+    export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:/path/to/tvm/nnvm/python:${PYTHONPATH}
     ```
 
 2. Install tvm python bindings by `setup.py`:
@@ -89,4 +100,5 @@ There are several ways to install the package:
     #       providing --user flag may trigger error during installation in such case.
     cd python; python setup.py install --user; cd ..
     cd topi/python; python setup.py install --user; cd ../..
+    cd nnvm/python; python setup.py install --user; cd ../..
     ```
diff --git a/docs/how_to/nnpack.md b/docs/how_to/nnpack.md
index 060bf9b5399fea730053723827ebf338f13476fb..d271af86b35a4f4a941973f5b3809af2dc87df99 100644
--- a/docs/how_to/nnpack.md
+++ b/docs/how_to/nnpack.md
@@ -1,10 +1,11 @@
 ### NNPACK for Multi-Core CPU Support in TVM
+
 [NNPACK](https://github.com/Maratyszcza/NNPACK) is an acceleration package
 for neural network computations, which can run on x86-64, ARMv7, or ARM64 architecture CPUs.
 Using NNPACK, higher-level libraries like _MXNet_ can speed up
 the execution on multi-core CPU computers, including laptops and mobile devices.
 
-***Note***: AS TVM already has natively tuned schedules, NNPACK is here mainly for reference and comparison purpose. 
+***Note***: As TVM already has natively tuned schedules, NNPACK is here mainly for reference and comparison purposes.
 For regular use prefer native tuned TVM implementation.
 
 _TVM_ supports NNPACK for forward propagation (inference only) in convolution, max-pooling, and fully-connected layers.
@@ -29,7 +30,7 @@ The following table explains under which conditions NNPACK will work.
 ### Build/Install LLVM
 LLVM is required for CPU codegen that needs LLVM.
 Since LLVM takes long time to build from source, you can download pre-built version of LLVM from [LLVM Download Page](http://releases.llvm.org/download.html).
-For llvm 4.0 you can do the following step : 
+For LLVM 4.0 you can do the following steps:
 
 ```bash
 # Add llvm repository in apt source list
@@ -63,7 +64,7 @@ apt-get install -y \
 
 If the trained model meets some conditions of using NNPACK,
 you can build TVM with NNPACK support.
-Follow these simple steps:  
+Follow these simple steps:
 * Build NNPACK shared library with the following commands. _TVM_ will link NNPACK dynamically.
 
 Note: The following NNPACK installation instructions have been tested on Ubuntu 16.04.
@@ -77,7 +78,7 @@ cd ninja
 ./configure.py --bootstrap
 ```
 
-Set the environment variable PATH to tell bash where to find the ninja executable. For example, assume we cloned ninja on the home directory ~. then we can added the following line in ~/.bashrc. 
+Set the environment variable PATH to tell bash where to find the ninja executable. For example, assume we cloned ninja into the home directory ~; then we can add the following line to ~/.bashrc.
 ```bash
 export PATH="${PATH}:~/ninja"
 ```
@@ -118,27 +119,4 @@ after configuration use `make` to build TVM
 
 ```bash
 make
-make install
-```
-
-#### Python Package Installation
-
-The python package for [tvm](https://github.com/dmlc/tvm) depends of [topi](https://github.com/dmlc/tvm/tree/master/topi).
-The tvm python package is located at `tvm/python` and topi python package is located in `tvm/topi/python` folder.
-There are several ways to install the package, in all these cases the TVM library and TOPI must be present in the python env:
-
-1. Set the environment variable PYTHONPATH to tell python where to find the libraries. For example, assume we cloned tvm on the home directory ~. then we can added the following line in ~/.bashrc. It is recommended for developers who may change the codes. The changes will be immediately reflected once you pulled the code and rebuild the project (no need to call setup again)
-
-```bash
-export PYTHONPATH=/path/to/tvm/python:/path/to/tvm/topi/python:${PYTHONPATH}
-```
-
-2. Install tvm and topi python bindings by setup.py:
-
-```bash
-# install tvm package for the current user
-cd topi/python
-python setup.py install --user; 
-cd ../../python
-python setup.py install --user; 
 ```
diff --git a/docs/index.rst b/docs/index.rst
index 3a5c8d7715c94e5d1deabfd07bb59da6d5b1a102..28f6b4052501343d347ef583af8c5ade24a2c404 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -1,23 +1,38 @@
 TVM Documentation
 =================
 
-Welcome to TVM documentation.
-
-
-Contents
---------
-
+Get Started
+-----------
 .. toctree::
    :maxdepth: 1
 
-   self
    how_to/install
    tutorials/index
-   faq
    how_to/deploy
    how_to/integrate
    how_to/contribute
+   faq
+
+API Reference
+-------------
+.. toctree::
+   :maxdepth: 2
+
    api/python/index
-   dev/index
    api_links
+
+Developer Guide
+---------------
+.. toctree::
+   :maxdepth: 2
+
+   dev/index
+   nnvm_top
+
+
+Index
+-----
+.. toctree::
+   :maxdepth: 1
+
    genindex
diff --git a/nnvm/docs/top.rst b/docs/nnvm_top.rst
similarity index 99%
rename from nnvm/docs/top.rst
rename to docs/nnvm_top.rst
index 11d0f6093f83411f7da4dc3273163e5b3fcdb909..3dccaddff8cdb231aa7f38c8df61ed58780213e3 100644
--- a/nnvm/docs/top.rst
+++ b/docs/nnvm_top.rst
@@ -1,5 +1,5 @@
-Core Tensor Operators
-=====================
+NNVM Core Tensor Operators
+==========================
 
 This page contains the list of core tensor operator primitives pre-defined in NNVM.
 The core tensor operator primitives(``nnvm.top``) covers typical workloads in deep learning.
diff --git a/nnvm/docs/.gitignore b/nnvm/docs/.gitignore
deleted file mode 100644
index d5d02112742530cd029dddc8655ae7cda2df41c9..0000000000000000000000000000000000000000
--- a/nnvm/docs/.gitignore
+++ /dev/null
@@ -1,4 +0,0 @@
-doxygen
-_build
-gen_modules
-tutorials
diff --git a/nnvm/docs/Doxyfile b/nnvm/docs/Doxyfile
deleted file mode 100644
index 4c6c2493aa625c8333f1ed91ca7573351117019f..0000000000000000000000000000000000000000
--- a/nnvm/docs/Doxyfile
+++ /dev/null
@@ -1,2353 +0,0 @@
-# Doxyfile 1.8.8
-
-# This file describes the settings to be used by the documentation system
-# doxygen (www.doxygen.org) for a project.
-#
-# All text after a double hash (##) is considered a comment and is placed in
-# front of the TAG it is preceding.
-#
-# All text after a single hash (#) is considered a comment and will be ignored.
-# The format is:
-# TAG = value [value, ...]
-# For lists, items can also be appended using:
-# TAG += value [value, ...]
-# Values that contain spaces should be placed between quotes (\" \").
-
-#---------------------------------------------------------------------------
-# Project related configuration options
-#---------------------------------------------------------------------------
-
-# This tag specifies the encoding used for all characters in the config file
-# that follow. The default is UTF-8 which is also the encoding used for all text
-# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
-# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv
-# for the list of possible encodings.
-# The default value is: UTF-8.
-
-DOXYFILE_ENCODING      = UTF-8
-
-# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
-# double-quotes, unless you are using Doxywizard) that should identify the
-# project for which the documentation is generated. This name is used in the
-# title of most generated pages and in a few other places.
-# The default value is: My Project.
-
-PROJECT_NAME           = "nnvm"
-
-# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
-# could be handy for archiving the generated documentation or if some version
-# control system is used.
-
-PROJECT_NUMBER         =
-
-# Using the PROJECT_BRIEF tag one can provide an optional one line description
-# for a project that appears at the top of each page and should give viewer a
-# quick idea about the purpose of the project. Keep the description short.
-
-PROJECT_BRIEF          =
-
-# With the PROJECT_LOGO tag one can specify an logo or icon that is included in
-# the documentation. The maximum height of the logo should not exceed 55 pixels
-# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo
-# to the output directory.
-
-PROJECT_LOGO           =
-
-# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
-# into which the generated documentation will be written. If a relative path is
-# entered, it will be relative to the location where doxygen was started. If
-# left blank the current directory will be used.
-
-OUTPUT_DIRECTORY       = docs/doxygen
-
-# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub-
-# directories (in 2 levels) under the output directory of each output format and
-# will distribute the generated files over these directories. Enabling this
-# option can be useful when feeding doxygen a huge amount of source files, where
-# putting all generated files in the same directory would otherwise causes
-# performance problems for the file system.
-# The default value is: NO.
-
-CREATE_SUBDIRS         = NO
-
-# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII
-# characters to appear in the names of generated files. If set to NO, non-ASCII
-# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode
-# U+3044.
-# The default value is: NO.
-
-#ALLOW_UNICODE_NAMES    = NO
-
-# The OUTPUT_LANGUAGE tag is used to specify the language in which all
-# documentation generated by doxygen is written. Doxygen will use this
-# information to generate all constant output in the proper language.
-# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
-# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
-# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
-# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
-# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
-# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
-# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
-# Ukrainian and Vietnamese.
-# The default value is: English.
-
-OUTPUT_LANGUAGE        = English
-
-# If the BRIEF_MEMBER_DESC tag is set to YES doxygen will include brief member
-# descriptions after the members that are listed in the file and class
-# documentation (similar to Javadoc). Set to NO to disable this.
-# The default value is: YES.
-
-BRIEF_MEMBER_DESC      = YES
-
-# If the REPEAT_BRIEF tag is set to YES doxygen will prepend the brief
-# description of a member or function before the detailed description
-#
-# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
-# brief descriptions will be completely suppressed.
-# The default value is: YES.
-
-REPEAT_BRIEF           = YES
-
-# This tag implements a quasi-intelligent brief description abbreviator that is
-# used to form the text in various listings. Each string in this list, if found
-# as the leading text of the brief description, will be stripped from the text
-# and the result, after processing the whole list, is used as the annotated
-# text. Otherwise, the brief description is used as-is. If left blank, the
-# following values are used ($name is automatically replaced with the name of
-# the entity):The $name class, The $name widget, The $name file, is, provides,
-# specifies, contains, represents, a, an and the.
-
-ABBREVIATE_BRIEF       =
-
-# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
-# doxygen will generate a detailed section even if there is only a brief
-# description.
-# The default value is: NO.
-
-ALWAYS_DETAILED_SEC    = NO
-
-# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
-# inherited members of a class in the documentation of that class as if those
-# members were ordinary class members. Constructors, destructors and assignment
-# operators of the base classes will not be shown.
-# The default value is: NO.
-
-INLINE_INHERITED_MEMB  = NO
-
-# If the FULL_PATH_NAMES tag is set to YES doxygen will prepend the full path
-# before files name in the file list and in the header files. If set to NO the
-# shortest path that makes the file name unique will be used
-# The default value is: YES.
-
-FULL_PATH_NAMES        = YES
-
-# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
-# Stripping is only done if one of the specified strings matches the left-hand
-# part of the path. The tag can be used to show relative paths in the file list.
-# If left blank the directory from which doxygen is run is used as the path to
-# strip.
-#
-# Note that you can specify absolute paths here, but also relative paths, which
-# will be relative from the directory where doxygen is started.
-# This tag requires that the tag FULL_PATH_NAMES is set to YES.
-
-STRIP_FROM_PATH        =
-
-# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
-# path mentioned in the documentation of a class, which tells the reader which
-# header file to include in order to use a class. If left blank only the name of
-# the header file containing the class definition is used. Otherwise one should
-# specify the list of include paths that are normally passed to the compiler
-# using the -I flag.
-
-STRIP_FROM_INC_PATH    =
-
-# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
-# less readable) file names. This can be useful is your file systems doesn't
-# support long names like on DOS, Mac, or CD-ROM.
-# The default value is: NO.
-
-SHORT_NAMES            = NO
-
-# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
-# first line (until the first dot) of a Javadoc-style comment as the brief
-# description. If set to NO, the Javadoc-style will behave just like regular Qt-
-# style comments (thus requiring an explicit @brief command for a brief
-# description.)
-# The default value is: NO.
-
-JAVADOC_AUTOBRIEF      = NO
-
-# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
-# line (until the first dot) of a Qt-style comment as the brief description. If
-# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
-# requiring an explicit \brief command for a brief description.)
-# The default value is: NO.
-
-QT_AUTOBRIEF           = NO
-
-# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
-# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
-# a brief description. This used to be the default behavior. The new default is
-# to treat a multi-line C++ comment block as a detailed description. Set this
-# tag to YES if you prefer the old behavior instead.
-#
-# Note that setting this tag to YES also means that rational rose comments are
-# not recognized any more.
-# The default value is: NO.
-
-MULTILINE_CPP_IS_BRIEF = NO
-
-# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
-# documentation from any documented member that it re-implements.
-# The default value is: YES.
-
-INHERIT_DOCS           = YES
-
-# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce a
-# new page for each member. If set to NO, the documentation of a member will be
-# part of the file/class/namespace that contains it.
-# The default value is: NO.
-
-SEPARATE_MEMBER_PAGES  = NO
-
-# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
-# uses this value to replace tabs by spaces in code fragments.
-# Minimum value: 1, maximum value: 16, default value: 4.
-
-TAB_SIZE               = 8
-
-# This tag can be used to specify a number of aliases that act as commands in
-# the documentation. An alias has the form:
-# name=value
-# For example adding
-# "sideeffect=@par Side Effects:\n"
-# will allow you to put the command \sideeffect (or @sideeffect) in the
-# documentation, which will result in a user-defined paragraph with heading
-# "Side Effects:". You can put \n's in the value part of an alias to insert
-# newlines.
-
-ALIASES                =
-
-# This tag can be used to specify a number of word-keyword mappings (TCL only).
-# A mapping has the form "name=value". For example adding "class=itcl::class"
-# will allow you to use the command class in the itcl::class meaning.
-
-TCL_SUBST              =
-
-# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
-# only. Doxygen will then generate output that is more tailored for C. For
-# instance, some of the names that are used will be different. The list of all
-# members will be omitted, etc.
-# The default value is: NO.
-
-OPTIMIZE_OUTPUT_FOR_C  = NO
-
-# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
-# Python sources only. Doxygen will then generate output that is more tailored
-# for that language. For instance, namespaces will be presented as packages,
-# qualified scopes will look different, etc.
-# The default value is: NO.
-
-OPTIMIZE_OUTPUT_JAVA   = NO
-
-# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
-# sources. Doxygen will then generate output that is tailored for Fortran.
-# The default value is: NO.
-
-OPTIMIZE_FOR_FORTRAN   = NO
-
-# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
-# sources. Doxygen will then generate output that is tailored for VHDL.
-# The default value is: NO.
-
-OPTIMIZE_OUTPUT_VHDL   = NO
-
-# Doxygen selects the parser to use depending on the extension of the files it
-# parses. With this tag you can assign which parser to use for a given
-# extension. Doxygen has a built-in mapping, but you can override or extend it
-# using this tag. The format is ext=language, where ext is a file extension, and
-# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
-# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
-# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
-# Fortran. In the later case the parser tries to guess whether the code is fixed
-# or free formatted code, this is the default for Fortran type files), VHDL. For
-# instance to make doxygen treat .inc files as Fortran files (default is PHP),
-# and .f files as C (default is Fortran), use: inc=Fortran f=C.
-#
-# Note For files without extension you can use no_extension as a placeholder.
-#
-# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
-# the files are not read by doxygen.
-
-EXTENSION_MAPPING      =
-
-# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
-# according to the Markdown format, which allows for more readable
-# documentation. See http://daringfireball.net/projects/markdown/ for details.
-# The output of markdown processing is further processed by doxygen, so you can
-# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
-# case of backward compatibilities issues.
-# The default value is: YES.
-
-#MARKDOWN_SUPPORT       = YES
-
-# When enabled doxygen tries to link words that correspond to documented
-# classes, or namespaces to their corresponding documentation. Such a link can
-# be prevented in individual cases by by putting a % sign in front of the word
-# or globally by setting AUTOLINK_SUPPORT to NO.
-# The default value is: YES.
-
-#AUTOLINK_SUPPORT       = YES
-
-# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
-# to include (a tag file for) the STL sources as input, then you should set this
-# tag to YES in order to let doxygen match functions declarations and
-# definitions whose arguments contain STL classes (e.g. func(std::string);
-# versus func(std::string) {}). This also make the inheritance and collaboration
-# diagrams that involve STL classes more complete and accurate.
-# The default value is: NO.
-
-BUILTIN_STL_SUPPORT    = NO
-
-# If you use Microsoft's C++/CLI language, you should set this option to YES to
-# enable parsing support.
-# The default value is: NO.
-
-CPP_CLI_SUPPORT        = NO
-
-# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
-# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
-# will parse them like normal C++ but will assume all classes use public instead
-# of private inheritance when no explicit protection keyword is present.
-# The default value is: NO.
-
-SIP_SUPPORT            = NO
-
-# For Microsoft's IDL there are propget and propput attributes to indicate
-# getter and setter methods for a property. Setting this option to YES will make
-# doxygen to replace the get and set methods by a property in the documentation.
-# This will only work if the methods are indeed getting or setting a simple
-# type. If this is not the case, or you want to show the methods anyway, you
-# should set this option to NO.
-# The default value is: YES.
-
-IDL_PROPERTY_SUPPORT   = YES
-
-# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
-# tag is set to YES, then doxygen will reuse the documentation of the first
-# member in the group (if any) for the other members of the group. By default
-# all members of a group must be documented explicitly.
-# The default value is: NO.
-
-DISTRIBUTE_GROUP_DOC   = NO
-
-# Set the SUBGROUPING tag to YES to allow class member groups of the same type
-# (for instance a group of public functions) to be put as a subgroup of that
-# type (e.g. under the Public Functions section). Set it to NO to prevent
-# subgrouping. Alternatively, this can be done per class using the
-# \nosubgrouping command.
-# The default value is: YES.
-
-SUBGROUPING            = YES
-
-# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
-# are shown inside the group in which they are included (e.g. using \ingroup)
-# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
-# and RTF).
-#
-# Note that this feature does not work in combination with
-# SEPARATE_MEMBER_PAGES.
-# The default value is: NO.
-
-INLINE_GROUPED_CLASSES = NO
-
-# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
-# with only public data fields or simple typedef fields will be shown inline in
-# the documentation of the scope in which they are defined (i.e. file,
-# namespace, or group documentation), provided this scope is documented. If set
-# to NO, structs, classes, and unions are shown on a separate page (for HTML and
-# Man pages) or section (for LaTeX and RTF).
-# The default value is: NO.
-
-INLINE_SIMPLE_STRUCTS  = NO
-
-# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
-# enum is documented as struct, union, or enum with the name of the typedef. So
-# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
-# with name TypeT. When disabled the typedef will appear as a member of a file,
-# namespace, or class. And the struct will be named TypeS. This can typically be
-# useful for C code in case the coding convention dictates that all compound
-# types are typedef'ed and only the typedef is referenced, never the tag name.
-# The default value is: NO.
-
-TYPEDEF_HIDES_STRUCT   = NO
-
-# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
-# cache is used to resolve symbols given their name and scope. Since this can be
-# an expensive process and often the same symbol appears multiple times in the
-# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
-# doxygen will become slower. If the cache is too large, memory is wasted. The
-# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
-# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
-# symbols. At the end of a run doxygen will report the cache usage and suggest
-# the optimal cache size from a speed point of view.
-# Minimum value: 0, maximum value: 9, default value: 0.
-
-LOOKUP_CACHE_SIZE      = 0
-
-#---------------------------------------------------------------------------
-# Build related configuration options
-#---------------------------------------------------------------------------
-
-# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in
-# documentation are documented, even if no documentation was available. Private
-# class members and static file members will be hidden unless the
-# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
-# Note: This will also disable the warnings about undocumented members that are
-# normally produced when WARNINGS is set to YES.
-# The default value is: NO.
-
-EXTRACT_ALL            = YES
-
-# If the EXTRACT_PRIVATE tag is set to YES all private members of a class will
-# be included in the documentation.
-# The default value is: NO.
-
-EXTRACT_PRIVATE        = NO
-
-# If the EXTRACT_PACKAGE tag is set to YES all members with package or internal
-# scope will be included in the documentation.
-# The default value is: NO.
-
-#EXTRACT_PACKAGE        = NO
-
-# If the EXTRACT_STATIC tag is set to YES all static members of a file will be
-# included in the documentation.
-# The default value is: NO.
-
-EXTRACT_STATIC         = NO
-
-# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) defined
-# locally in source files will be included in the documentation. If set to NO
-# only classes defined in header files are included. Does not have any effect
-# for Java sources.
-# The default value is: YES.
-
-EXTRACT_LOCAL_CLASSES  = YES
-
-# This flag is only useful for Objective-C code. When set to YES local methods,
-# which are defined in the implementation section but not in the interface are
-# included in the documentation. If set to NO only methods in the interface are
-# included.
-# The default value is: NO.
-
-EXTRACT_LOCAL_METHODS  = NO
-
-# If this flag is set to YES, the members of anonymous namespaces will be
-# extracted and appear in the documentation as a namespace called
-# 'anonymous_namespace{file}', where file will be replaced with the base name of
-# the file that contains the anonymous namespace. By default anonymous namespace
-# are hidden.
-# The default value is: NO.
-
-EXTRACT_ANON_NSPACES   = NO
-
-# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
-# undocumented members inside documented classes or files. If set to NO these
-# members will be included in the various overviews, but no documentation
-# section is generated. This option has no effect if EXTRACT_ALL is enabled.
-# The default value is: NO.
-
-HIDE_UNDOC_MEMBERS     = NO
-
-# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
-# undocumented classes that are normally visible in the class hierarchy. If set
-# to NO these classes will be included in the various overviews. This option has
-# no effect if EXTRACT_ALL is enabled.
-# The default value is: NO.
-
-HIDE_UNDOC_CLASSES     = NO
-
-# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
-# (class|struct|union) declarations. If set to NO these declarations will be
-# included in the documentation.
-# The default value is: NO.
-
-HIDE_FRIEND_COMPOUNDS  = NO
-
-# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
-# documentation blocks found inside the body of a function. If set to NO these
-# blocks will be appended to the function's detailed documentation block.
-# The default value is: NO.
-
-HIDE_IN_BODY_DOCS      = NO
-
-# The INTERNAL_DOCS tag determines if documentation that is typed after a
-# \internal command is included. If the tag is set to NO then the documentation
-# will be excluded. Set it to YES to include the internal documentation.
-# The default value is: NO.
-
-INTERNAL_DOCS          = NO
-
-# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
-# names in lower-case letters. If set to YES upper-case letters are also
-# allowed. This is useful if you have classes or files whose names only differ
-# in case and if your file system supports case sensitive file names. Windows
-# and Mac users are advised to set this option to NO.
-# The default value is: system dependent.
-
-CASE_SENSE_NAMES       = YES
-
-# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
-# their full class and namespace scopes in the documentation. If set to YES the
-# scope will be hidden.
-# The default value is: NO.
-
-HIDE_SCOPE_NAMES       = NO
-
-# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
-# the files that are included by a file in the documentation of that file.
-# The default value is: YES.
-
-SHOW_INCLUDE_FILES     = YES
-
-# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
-# grouped member an include statement to the documentation, telling the reader
-# which file to include in order to use the member.
-# The default value is: NO.
-
-#SHOW_GROUPED_MEMB_INC  = NO
-
-# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
-# files with double quotes in the documentation rather than with sharp brackets.
-# The default value is: NO.
-
-FORCE_LOCAL_INCLUDES   = NO
-
-# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
-# documentation for inline members.
-# The default value is: YES.
-
-INLINE_INFO            = YES
-
-# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
-# (detailed) documentation of file and class members alphabetically by member
-# name. If set to NO the members will appear in declaration order.
-# The default value is: YES.
-
-SORT_MEMBER_DOCS       = YES
-
-# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
-# descriptions of file, namespace and class members alphabetically by member
-# name. If set to NO the members will appear in declaration order. Note that
-# this will also influence the order of the classes in the class list.
-# The default value is: NO.
-
-SORT_BRIEF_DOCS        = NO
-
-# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
-# (brief and detailed) documentation of class members so that constructors and
-# destructors are listed first. If set to NO the constructors will appear in the
-# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
-# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
-# member documentation.
-# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
-# detailed member documentation.
-# The default value is: NO.
-
-SORT_MEMBERS_CTORS_1ST = NO
-
-# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
-# of group names into alphabetical order. If set to NO the group names will
-# appear in their defined order.
-# The default value is: NO.
-
-SORT_GROUP_NAMES       = NO
-
-# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
-# fully-qualified names, including namespaces. If set to NO, the class list will
-# be sorted only by class name, not including the namespace part.
-# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
-# Note: This option applies only to the class list, not to the alphabetical
-# list.
-# The default value is: NO.
-
-SORT_BY_SCOPE_NAME     = NO
-
-# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
-# type resolution of all parameters of a function it will reject a match between
-# the prototype and the implementation of a member function even if there is
-# only one candidate or it is obvious which candidate to choose by doing a
-# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
-# accept a match between prototype and implementation in such cases.
-# The default value is: NO.
-
-STRICT_PROTO_MATCHING  = NO
-
-# The GENERATE_TODOLIST tag can be used to enable ( YES) or disable ( NO) the
-# todo list. This list is created by putting \todo commands in the
-# documentation.
-# The default value is: YES.
-
-GENERATE_TODOLIST      = YES
-
-# The GENERATE_TESTLIST tag can be used to enable ( YES) or disable ( NO) the
-# test list. This list is created by putting \test commands in the
-# documentation.
-# The default value is: YES.
-
-GENERATE_TESTLIST      = YES
-
-# The GENERATE_BUGLIST tag can be used to enable ( YES) or disable ( NO) the bug
-# list. This list is created by putting \bug commands in the documentation.
-# The default value is: YES.
-
-GENERATE_BUGLIST       = YES
-
-# The GENERATE_DEPRECATEDLIST tag can be used to enable ( YES) or disable ( NO)
-# the deprecated list. This list is created by putting \deprecated commands in
-# the documentation.
-# The default value is: YES.
-
-GENERATE_DEPRECATEDLIST= YES
-
-# The ENABLED_SECTIONS tag can be used to enable conditional documentation
-# sections, marked by \if <section_label> ... \endif and \cond <section_label>
-# ... \endcond blocks.
-
-ENABLED_SECTIONS       =
-
-# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
-# initial value of a variable or macro / define can have for it to appear in the
-# documentation. If the initializer consists of more lines than specified here
-# it will be hidden. Use a value of 0 to hide initializers completely. The
-# appearance of the value of individual variables and macros / defines can be
-# controlled using \showinitializer or \hideinitializer command in the
-# documentation regardless of this setting.
-# Minimum value: 0, maximum value: 10000, default value: 30.
-
-MAX_INITIALIZER_LINES  = 30
-
-# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
-# the bottom of the documentation of classes and structs. If set to YES the list
-# will mention the files that were used to generate the documentation.
-# The default value is: YES.
-
-SHOW_USED_FILES        = YES
-
-# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
-# will remove the Files entry from the Quick Index and from the Folder Tree View
-# (if specified).
-# The default value is: YES.
-
-SHOW_FILES             = YES
-
-# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
-# page. This will remove the Namespaces entry from the Quick Index and from the
-# Folder Tree View (if specified).
-# The default value is: YES.
-
-SHOW_NAMESPACES        = YES
-
-# The FILE_VERSION_FILTER tag can be used to specify a program or script that
-# doxygen should invoke to get the current version for each file (typically from
-# the version control system). Doxygen will invoke the program by executing (via
-# popen()) the command command input-file, where command is the value of the
-# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided
-# by doxygen. Whatever the program writes to standard output is used as the file
-# version. For an example see the documentation.
-
-FILE_VERSION_FILTER    =
-
-# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
-# by doxygen. The layout file controls the global structure of the generated
-# output files in an output format independent way. To create the layout file
-# that represents doxygen's defaults, run doxygen with the -l option. You can
-# optionally specify a file name after the option, if omitted DoxygenLayout.xml
-# will be used as the name of the layout file.
-#
-# Note that if you run doxygen from a directory containing a file called
-# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
-# tag is left empty.
-
-LAYOUT_FILE            =
-
-# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
-# the reference definitions. This must be a list of .bib files. The .bib
-# extension is automatically appended if omitted. This requires the bibtex tool
-# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
-# For LaTeX the style of the bibliography can be controlled using
-# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
-# search path. See also \cite for info how to create references.
-
-CITE_BIB_FILES         =
-
-#---------------------------------------------------------------------------
-# Configuration options related to warning and progress messages
-#---------------------------------------------------------------------------
-
-# The QUIET tag can be used to turn on/off the messages that are generated to
-# standard output by doxygen. If QUIET is set to YES this implies that the
-# messages are off.
-# The default value is: NO.
-
-QUIET                  = NO
-
-# The WARNINGS tag can be used to turn on/off the warning messages that are
-# generated to standard error ( stderr) by doxygen. If WARNINGS is set to YES
-# this implies that the warnings are on.
-#
-# Tip: Turn warnings on while writing the documentation.
-# The default value is: YES.
-
-WARNINGS               = YES
-
-# If the WARN_IF_UNDOCUMENTED tag is set to YES, then doxygen will generate
-# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
-# will automatically be disabled.
-# The default value is: YES.
-
-WARN_IF_UNDOCUMENTED   = YES
-
-# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
-# potential errors in the documentation, such as not documenting some parameters
-# in a documented function, or documenting parameters that don't exist or using
-# markup commands wrongly.
-# The default value is: YES.
-
-WARN_IF_DOC_ERROR      = YES
-
-# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
-# are documented, but have no documentation for their parameters or return
-# value. If set to NO doxygen will only warn about wrong or incomplete parameter
-# documentation, but not about the absence of documentation.
-# The default value is: NO.
-
-WARN_NO_PARAMDOC       = YES
-
-# The WARN_FORMAT tag determines the format of the warning messages that doxygen
-# can produce. The string should contain the $file, $line, and $text tags, which
-# will be replaced by the file and line number from which the warning originated
-# and the warning text. Optionally the format may contain $version, which will
-# be replaced by the version of the file (if it could be obtained via
-# FILE_VERSION_FILTER)
-# The default value is: $file:$line: $text.
-
-WARN_FORMAT            = "$file:$line: $text"
-
-# The WARN_LOGFILE tag can be used to specify a file to which warning and error
-# messages should be written. If left blank the output is written to standard
-# error (stderr).
-
-WARN_LOGFILE           =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the input files
-#---------------------------------------------------------------------------
-
-# The INPUT tag is used to specify the files and/or directories that contain
-# documented source files. You may enter file names like myfile.cpp or
-# directories like /usr/src/myproject. Separate the files or directories with
-# spaces.
-# Note: If this tag is empty the current directory is searched.
-
-INPUT                  = include/nnvm
-
-# This tag can be used to specify the character encoding of the source files
-# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
-# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
-# documentation (see: http://www.gnu.org/software/libiconv) for the list of
-# possible encodings.
-# The default value is: UTF-8.
-
-INPUT_ENCODING         = UTF-8
-
-# If the value of the INPUT tag contains directories, you can use the
-# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
-# *.h) to filter out the source-files in the directories. If left blank the
-# following patterns are tested:*.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii,
-# *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp,
-# *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown,
-# *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf,
-# *.qsf, *.as and *.js.
-
-FILE_PATTERNS          = *.h
-
-# The RECURSIVE tag can be used to specify whether or not subdirectories should
-# be searched for input files as well.
-# The default value is: NO.
-
-RECURSIVE              = YES
-
-# The EXCLUDE tag can be used to specify files and/or directories that should be
-# excluded from the INPUT source files. This way you can easily exclude a
-# subdirectory from a directory tree whose root is specified with the INPUT tag.
-#
-# Note that relative paths are relative to the directory from which doxygen is
-# run.
-
-EXCLUDE                =
-
-# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
-# directories that are symbolic links (a Unix file system feature) are excluded
-# from the input.
-# The default value is: NO.
-
-EXCLUDE_SYMLINKS       = NO
-
-# If the value of the INPUT tag contains directories, you can use the
-# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
-# certain files from those directories.
-#
-# Note that the wildcards are matched against the file with absolute path, so to
-# exclude all test directories for example use the pattern */test/*
-
-EXCLUDE_PATTERNS       = */test/* \
-                         logging.h
-
-# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
-# (namespaces, classes, functions, etc.) that should be excluded from the
-# output. The symbol name can be a fully qualified name, a word, or if the
-# wildcard * is used, a substring. Examples: ANamespace, AClass,
-# AClass::ANamespace, ANamespace::*Test
-#
-# Note that the wildcards are matched against the file with absolute path, so to
-# exclude all test directories use the pattern */test/*
-
-EXCLUDE_SYMBOLS        =
-
-# The EXAMPLE_PATH tag can be used to specify one or more files or directories
-# that contain example code fragments that are included (see the \include
-# command).
-
-EXAMPLE_PATH           =
-
-# If the value of the EXAMPLE_PATH tag contains directories, you can use the
-# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
-# *.h) to filter out the source-files in the directories. If left blank all
-# files are included.
-
-EXAMPLE_PATTERNS       =
-
-# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
-# searched for input files to be used with the \include or \dontinclude commands
-# irrespective of the value of the RECURSIVE tag.
-# The default value is: NO.
-
-EXAMPLE_RECURSIVE      = NO
-
-# The IMAGE_PATH tag can be used to specify one or more files or directories
-# that contain images that are to be included in the documentation (see the
-# \image command).
-
-IMAGE_PATH             =
-
-# The INPUT_FILTER tag can be used to specify a program that doxygen should
-# invoke to filter for each input file. Doxygen will invoke the filter program
-# by executing (via popen()) the command:
-#
-# <filter> <input-file>
-#
-# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
-# name of an input file. Doxygen will then use the output that the filter
-# program writes to standard output. If FILTER_PATTERNS is specified, this tag
-# will be ignored.
-#
-# Note that the filter must not add or remove lines; it is applied before the
-# code is scanned, but not when the output code is generated. If lines are added
-# or removed, the anchors will not be placed correctly.
-
-INPUT_FILTER           =
-
-# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
-# basis. Doxygen will compare the file name with each pattern and apply the
-# filter if there is a match. The filters are a list of the form: pattern=filter
-# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
-# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
-# patterns match the file name, INPUT_FILTER is applied.
-
-FILTER_PATTERNS        =
-
-# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
-# INPUT_FILTER ) will also be used to filter the input files that are used for
-# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
-# The default value is: NO.
-
-FILTER_SOURCE_FILES    = NO
-
-# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
-# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
-# it is also possible to disable source filtering for a specific pattern using
-# *.ext= (so without naming a filter).
-# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
-
-FILTER_SOURCE_PATTERNS =
-
-# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
-# is part of the input, its contents will be placed on the main page
-# (index.html). This can be useful if you have a project on for instance GitHub
-# and want to reuse the introduction page also for the doxygen output.
-
-#USE_MDFILE_AS_MAINPAGE =
-
-#---------------------------------------------------------------------------
-# Configuration options related to source browsing
-#---------------------------------------------------------------------------
-
-# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
-# generated. Documented entities will be cross-referenced with these sources.
-#
-# Note: To get rid of all source code in the generated output, make sure that
-# also VERBATIM_HEADERS is set to NO.
-# The default value is: NO.
-
-SOURCE_BROWSER         = NO
-
-# Setting the INLINE_SOURCES tag to YES will include the body of functions,
-# classes and enums directly into the documentation.
-# The default value is: NO.
-
-INLINE_SOURCES         = NO
-
-# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
-# special comment blocks from generated source code fragments. Normal C, C++ and
-# Fortran comments will always remain visible.
-# The default value is: YES.
-
-STRIP_CODE_COMMENTS    = YES
-
-# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
-# function all documented functions referencing it will be listed.
-# The default value is: NO.
-
-REFERENCED_BY_RELATION = NO
-
-# If the REFERENCES_RELATION tag is set to YES then for each documented function
-# all documented entities called/used by that function will be listed.
-# The default value is: NO.
-
-REFERENCES_RELATION    = NO
-
-# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
-# to YES, then the hyperlinks from functions in REFERENCES_RELATION and
-# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
-# link to the documentation.
-# The default value is: YES.
-
-REFERENCES_LINK_SOURCE = YES
-
-# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
-# source code will show a tooltip with additional information such as prototype,
-# brief description and links to the definition and documentation. Since this
-# will make the HTML file larger and loading of large files a bit slower, you
-# can opt to disable this feature.
-# The default value is: YES.
-# This tag requires that the tag SOURCE_BROWSER is set to YES.
-
-#SOURCE_TOOLTIPS        = YES
-
-# If the USE_HTAGS tag is set to YES then the references to source code will
-# point to the HTML generated by the htags(1) tool instead of doxygen built-in
-# source browser. The htags tool is part of GNU's global source tagging system
-# (see http://www.gnu.org/software/global/global.html). You will need version
-# 4.8.6 or higher.
-#
-# To use it do the following:
-# - Install the latest version of global
-# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
-# - Make sure the INPUT points to the root of the source tree
-# - Run doxygen as normal
-#
-# Doxygen will invoke htags (and that will in turn invoke gtags), so these
-# tools must be available from the command line (i.e. in the search path).
-#
-# The result: instead of the source browser generated by doxygen, the links to
-# source code will now point to the output of htags.
-# The default value is: NO.
-# This tag requires that the tag SOURCE_BROWSER is set to YES.
-
-USE_HTAGS              = NO
-
-# If the VERBATIM_HEADERS tag is set the YES then doxygen will generate a
-# verbatim copy of the header file for each class for which an include is
-# specified. Set to NO to disable this.
-# See also: Section \class.
-# The default value is: YES.
-
-VERBATIM_HEADERS       = YES
-
-# If the CLANG_ASSISTED_PARSING tag is set to YES, then doxygen will use the
-# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the
-# cost of reduced performance. This can be particularly helpful with template
-# rich C++ code for which doxygen's built-in parser lacks the necessary type
-# information.
-# Note: The availability of this option depends on whether or not doxygen was
-# compiled with the --with-libclang option.
-# The default value is: NO.
-
-#CLANG_ASSISTED_PARSING = NO
-
-# If clang assisted parsing is enabled you can provide the compiler with command
-# line options that you would normally use when invoking the compiler. Note that
-# the include paths will already be set by doxygen for the files and directories
-# specified with INPUT and INCLUDE_PATH.
-# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.
-
-#CLANG_OPTIONS          =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the alphabetical class index
-#---------------------------------------------------------------------------
-
-# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
-# compounds will be generated. Enable this if the project contains a lot of
-# classes, structs, unions or interfaces.
-# The default value is: YES.
-
-ALPHABETICAL_INDEX     = YES
-
-# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
-# which the alphabetical index list will be split.
-# Minimum value: 1, maximum value: 20, default value: 5.
-# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
-
-COLS_IN_ALPHA_INDEX    = 5
-
-# In case all classes in a project start with a common prefix, all classes will
-# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
-# can be used to specify a prefix (or a list of prefixes) that should be ignored
-# while generating the index headers.
-# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
-
-IGNORE_PREFIX          =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the HTML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_HTML tag is set to YES doxygen will generate HTML output
-# The default value is: YES.
-
-GENERATE_HTML          = YES
-
-# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: html.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_OUTPUT            = html
-
-# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
-# generated HTML page (for example: .htm, .php, .asp).
-# The default value is: .html.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_FILE_EXTENSION    = .html
-
-# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
-# each generated HTML page. If the tag is left blank doxygen will generate a
-# standard header.
-#
-# To get valid HTML the header file that includes any scripts and style sheets
-# that doxygen needs, which is dependent on the configuration options used (e.g.
-# the setting GENERATE_TREEVIEW). It is highly recommended to start with a
-# default header using
-# doxygen -w html new_header.html new_footer.html new_stylesheet.css
-# YourConfigFile
-# and then modify the file new_header.html. See also section "Doxygen usage"
-# for information on how to generate the default header that doxygen normally
-# uses.
-# Note: The header is subject to change so you typically have to regenerate the
-# default header when upgrading to a newer version of doxygen. For a description
-# of the possible markers and block names see the documentation.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_HEADER            =
-
-# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
-# generated HTML page. If the tag is left blank doxygen will generate a standard
-# footer. See HTML_HEADER for more information on how to generate a default
-# footer and what special commands can be used inside the footer. See also
-# section "Doxygen usage" for information on how to generate the default footer
-# that doxygen normally uses.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_FOOTER            =
-
-# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
-# sheet that is used by each HTML page. It can be used to fine-tune the look of
-# the HTML output. If left blank doxygen will generate a default style sheet.
-# See also section "Doxygen usage" for information on how to generate the style
-# sheet that doxygen normally uses.
-# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
-# it is more robust and this tag (HTML_STYLESHEET) will in the future become
-# obsolete.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_STYLESHEET        =
-
-# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
-# cascading style sheets that are included after the standard style sheets
-# created by doxygen. Using this option one can overrule certain style aspects.
-# This is preferred over using HTML_STYLESHEET since it does not replace the
-# standard style sheet and is therefor more robust against future updates.
-# Doxygen will copy the style sheet files to the output directory.
-# Note: The order of the extra stylesheet files is of importance (e.g. the last
-# stylesheet in the list overrules the setting of the previous ones in the
-# list). For an example see the documentation.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-#HTML_EXTRA_STYLESHEET  =
-
-# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
-# other source files which should be copied to the HTML output directory. Note
-# that these files will be copied to the base HTML output directory. Use the
-# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
-# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
-# files will be copied as-is; there are no commands or markers available.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_EXTRA_FILES       =
-
-# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
-# will adjust the colors in the stylesheet and background images according to
-# this color. Hue is specified as an angle on a colorwheel, see
-# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
-# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
-# purple, and 360 is red again.
-# Minimum value: 0, maximum value: 359, default value: 220.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_COLORSTYLE_HUE    = 220
-
-# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
-# in the HTML output. For a value of 0 the output will use grayscales only. A
-# value of 255 will produce the most vivid colors.
-# Minimum value: 0, maximum value: 255, default value: 100.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_COLORSTYLE_SAT    = 100
-
-# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
-# luminance component of the colors in the HTML output. Values below 100
-# gradually make the output lighter, whereas values above 100 make the output
-# darker. The value divided by 100 is the actual gamma applied, so 80 represents
-# a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not
-# change the gamma.
-# Minimum value: 40, maximum value: 240, default value: 80.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_COLORSTYLE_GAMMA  = 80
-
-# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
-# page will contain the date and time when the page was generated. Setting this
-# to NO can help when comparing the output of multiple runs.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_TIMESTAMP         = YES
-
-# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
-# documentation will contain sections that can be hidden and shown after the
-# page has loaded.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_DYNAMIC_SECTIONS  = NO
-
-# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
-# shown in the various tree structured indices initially; the user can expand
-# and collapse entries dynamically later on. Doxygen will expand the tree to
-# such a level that at most the specified number of entries are visible (unless
-# a fully collapsed tree already exceeds this amount). So setting the number of
-# entries 1 will produce a full collapsed tree by default. 0 is a special value
-# representing an infinite number of entries and will result in a full expanded
-# tree by default.
-# Minimum value: 0, maximum value: 9999, default value: 100.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-#HTML_INDEX_NUM_ENTRIES = 100
-
-# If the GENERATE_DOCSET tag is set to YES, additional index files will be
-# generated that can be used as input for Apple's Xcode 3 integrated development
-# environment (see: http://developer.apple.com/tools/xcode/), introduced with
-# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
-# Makefile in the HTML output directory. Running make will produce the docset in
-# that directory and running make install will install the docset in
-# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
-# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
-# for more information.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_DOCSET        = NO
-
-# This tag determines the name of the docset feed. A documentation feed provides
-# an umbrella under which multiple documentation sets from a single provider
-# (such as a company or product suite) can be grouped.
-# The default value is: Doxygen generated docs.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_FEEDNAME        = "Doxygen generated docs"
-
-# This tag specifies a string that should uniquely identify the documentation
-# set bundle. This should be a reverse domain-name style string, e.g.
-# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
-# The default value is: org.doxygen.Project.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_BUNDLE_ID       = org.doxygen.Project
-
-# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
-# the documentation publisher. This should be a reverse domain-name style
-# string, e.g. com.mycompany.MyDocSet.documentation.
-# The default value is: org.doxygen.Publisher.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_PUBLISHER_ID    = org.doxygen.Publisher
-
-# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
-# The default value is: Publisher.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_PUBLISHER_NAME  = Publisher
-
-# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
-# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
-# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
-# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
-# Windows.
-#
-# The HTML Help Workshop contains a compiler that can convert all HTML output
-# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
-# files are now used as the Windows 98 help format, and will replace the old
-# Windows help format (.hlp) on all Windows platforms in the future. Compressed
-# HTML files also contain an index, a table of contents, and you can search for
-# words in the documentation. The HTML workshop also contains a viewer for
-# compressed HTML files.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_HTMLHELP      = NO
-
-# The CHM_FILE tag can be used to specify the file name of the resulting .chm
-# file. You can add a path in front of the file if the result should not be
-# written to the html output directory.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-CHM_FILE               =
-
-# The HHC_LOCATION tag can be used to specify the location (absolute path
-# including file name) of the HTML help compiler ( hhc.exe). If non-empty
-# doxygen will try to run the HTML help compiler on the generated index.hhp.
-# The file has to be specified with full path.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-HHC_LOCATION           =
-
-# The GENERATE_CHI flag controls if a separate .chi index file is generated (
-# YES) or that it should be included in the master .chm file ( NO).
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-GENERATE_CHI           = NO
-
-# The CHM_INDEX_ENCODING is used to encode HtmlHelp index ( hhk), content ( hhc)
-# and project file content.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-CHM_INDEX_ENCODING     =
-
-# The BINARY_TOC flag controls whether a binary table of contents is generated (
-# YES) or a normal table of contents ( NO) in the .chm file. Furthermore it
-# enables the Previous and Next buttons.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-BINARY_TOC             = NO
-
-# The TOC_EXPAND flag can be set to YES to add extra items for group members to
-# the table of contents of the HTML help documentation and to the tree view.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-TOC_EXPAND             = NO
-
-# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
-# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
-# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
-# (.qch) of the generated HTML documentation.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_QHP           = NO
-
-# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
-# the file name of the resulting .qch file. The path specified is relative to
-# the HTML output folder.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QCH_FILE               =
-
-# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
-# Project output. For more information please see Qt Help Project / Namespace
-# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
-# The default value is: org.doxygen.Project.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_NAMESPACE          = org.doxygen.Project
-
-# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
-# Help Project output. For more information please see Qt Help Project / Virtual
-# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-
-# folders).
-# The default value is: doc.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_VIRTUAL_FOLDER     = doc
-
-# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
-# filter to add. For more information please see Qt Help Project / Custom
-# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
-# filters).
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_CUST_FILTER_NAME   =
-
-# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
-# custom filter to add. For more information please see Qt Help Project / Custom
-# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
-# filters).
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_CUST_FILTER_ATTRS  =
-
-# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
-# project's filter section matches. Qt Help Project / Filter Attributes (see:
-# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_SECT_FILTER_ATTRS  =
-
-# The QHG_LOCATION tag can be used to specify the location of Qt's
-# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
-# generated .qhp file.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHG_LOCATION           =
-
-# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
-# generated, together with the HTML files, they form an Eclipse help plugin. To
-# install this plugin and make it available under the help contents menu in
-# Eclipse, the contents of the directory containing the HTML and XML files needs
-# to be copied into the plugins directory of eclipse. The name of the directory
-# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
-# After copying Eclipse needs to be restarted before the help appears.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_ECLIPSEHELP   = NO
-
-# A unique identifier for the Eclipse help plugin. When installing the plugin
-# the directory name containing the HTML and XML files should also have this
-# name. Each documentation set should have its own identifier.
-# The default value is: org.doxygen.Project.
-# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
-
-ECLIPSE_DOC_ID         = org.doxygen.Project
-
-# If you want full control over the layout of the generated HTML pages it might
-# be necessary to disable the index and replace it with your own. The
-# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
-# of each HTML page. A value of NO enables the index and the value YES disables
-# it. Since the tabs in the index contain the same information as the navigation
-# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-DISABLE_INDEX          = NO
-
-# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
-# structure should be generated to display hierarchical information. If the tag
-# value is set to YES, a side panel will be generated containing a tree-like
-# index structure (just like the one that is generated for HTML Help). For this
-# to work a browser that supports JavaScript, DHTML, CSS and frames is required
-# (i.e. any modern browser). Windows users are probably better off using the
-# HTML help feature. Via custom stylesheets (see HTML_EXTRA_STYLESHEET) one can
-# further fine-tune the look of the index. As an example, the default style
-# sheet generated by doxygen has an example that shows how to put an image at
-# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
-# the same information as the tab index, you could consider setting
-# DISABLE_INDEX to YES when enabling this option.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_TREEVIEW      = NO
-
-# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
-# doxygen will group on one line in the generated HTML documentation.
-#
-# Note that a value of 0 will completely suppress the enum values from appearing
-# in the overview section.
-# Minimum value: 0, maximum value: 20, default value: 4.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-ENUM_VALUES_PER_LINE   = 4
-
-# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
-# to set the initial width (in pixels) of the frame in which the tree is shown.
-# Minimum value: 0, maximum value: 1500, default value: 250.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-TREEVIEW_WIDTH         = 250
-
-# When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open links to
-# external symbols imported via tag files in a separate window.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-EXT_LINKS_IN_WINDOW    = NO
-
-# Use this tag to change the font size of LaTeX formulas included as images in
-# the HTML documentation. When you change the font size after a successful
-# doxygen run you need to manually remove any form_*.png images from the HTML
-# output directory to force them to be regenerated.
-# Minimum value: 8, maximum value: 50, default value: 10.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-FORMULA_FONTSIZE       = 10
-
-# Use the FORMULA_TRANPARENT tag to determine whether or not the images
-# generated for formulas are transparent PNGs. Transparent PNGs are not
-# supported properly for IE 6.0, but are supported on all modern browsers.
-#
-# Note that when changing this option you need to delete any form_*.png files in
-# the HTML output directory before the changes have effect.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-FORMULA_TRANSPARENT    = YES
-
-# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
-# http://www.mathjax.org) which uses client side Javascript for the rendering
-# instead of using prerendered bitmaps. Use this if you do not have LaTeX
-# installed or if you want to formulas look prettier in the HTML output. When
-# enabled you may also need to install MathJax separately and configure the path
-# to it using the MATHJAX_RELPATH option.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-USE_MATHJAX            = NO
-
-# When MathJax is enabled you can set the default output format to be used for
-# the MathJax output. See the MathJax site (see:
-# http://docs.mathjax.org/en/latest/output.html) for more details.
-# Possible values are: HTML-CSS (which is slower, but has the best
-# compatibility), NativeMML (i.e. MathML) and SVG.
-# The default value is: HTML-CSS.
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-#MATHJAX_FORMAT         = HTML-CSS
-
-# When MathJax is enabled you need to specify the location relative to the HTML
-# output directory using the MATHJAX_RELPATH option. The destination directory
-# should contain the MathJax.js script. For instance, if the mathjax directory
-# is located at the same level as the HTML output directory, then
-# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
-# Content Delivery Network so you can quickly see the result without installing
-# MathJax. However, it is strongly recommended to install a local copy of
-# MathJax from http://www.mathjax.org before deployment.
-# The default value is: http://cdn.mathjax.org/mathjax/latest.
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-MATHJAX_RELPATH        = http://www.mathjax.org/mathjax
-
-# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
-# extension names that should be enabled during MathJax rendering. For example
-# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-MATHJAX_EXTENSIONS     =
-
-# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
-# of code that will be used on startup of the MathJax code. See the MathJax site
-# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
-# example see the documentation.
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-#MATHJAX_CODEFILE       =
-
-# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
-# the HTML output. The underlying search engine uses javascript and DHTML and
-# should work on any modern browser. Note that when using HTML help
-# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
-# there is already a search function so this one should typically be disabled.
-# For large projects the javascript based search engine can be slow, then
-# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
-# search using the keyboard; to jump to the search box use <access key> + S
-# (what the <access key> is depends on the OS and browser, but it is typically
-# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
-# key> to jump into the search results window, the results can be navigated
-# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
-# the search. The filter options can be selected when the cursor is inside the
-# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
-# to select a filter and <Enter> or <escape> to activate or cancel the filter
-# option.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-SEARCHENGINE           = YES
-
-# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
-# implemented using a web server instead of a web client using Javascript. There
-# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
-# setting. When disabled, doxygen will generate a PHP script for searching and
-# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
-# and searching needs to be provided by external tools. See the section
-# "External Indexing and Searching" for details.
-# The default value is: NO.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-SERVER_BASED_SEARCH    = NO
-
-# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
-# script for searching. Instead the search results are written to an XML file
-# which needs to be processed by an external indexer. Doxygen will invoke an
-# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
-# search results.
-#
-# Doxygen ships with an example indexer ( doxyindexer) and search engine
-# (doxysearch.cgi) which are based on the open source search engine library
-# Xapian (see: http://xapian.org/).
-#
-# See the section "External Indexing and Searching" for details.
-# The default value is: NO.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-#EXTERNAL_SEARCH        = NO
-
-# The SEARCHENGINE_URL should point to a search engine hosted by a web server
-# which will return the search results when EXTERNAL_SEARCH is enabled.
-#
-# Doxygen ships with an example indexer ( doxyindexer) and search engine
-# (doxysearch.cgi) which are based on the open source search engine library
-# Xapian (see: http://xapian.org/). See the section "External Indexing and
-# Searching" for details.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-#SEARCHENGINE_URL       =
-
-# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
-# search data is written to a file for indexing by an external tool. With the
-# SEARCHDATA_FILE tag the name of this file can be specified.
-# The default file is: searchdata.xml.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-#SEARCHDATA_FILE        = searchdata.xml
-
-# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
-# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
-# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
-# projects and redirect the results back to the right project.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-#EXTERNAL_SEARCH_ID     =
-
-# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
-# projects other than the one defined by this configuration file, but that are
-# all added to the same external search index. Each project needs to have a
-# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
-# to a relative location where the documentation can be found. The format is:
-# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-#EXTRA_SEARCH_MAPPINGS  =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the LaTeX output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_LATEX tag is set to YES doxygen will generate LaTeX output.
-# The default value is: YES.
-
-GENERATE_LATEX         = YES
-
-# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: latex.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_OUTPUT           = latex
-
-# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
-# invoked.
-#
-# Note that when enabling USE_PDFLATEX this option is only used for generating
-# bitmaps for formulas in the HTML output, but not in the Makefile that is
-# written to the output directory.
-# The default file is: latex.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_CMD_NAME         = latex
-
-# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
-# index for LaTeX.
-# The default file is: makeindex.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-MAKEINDEX_CMD_NAME     = makeindex
-
-# If the COMPACT_LATEX tag is set to YES doxygen generates more compact LaTeX
-# documents. This may be useful for small projects and may help to save some
-# trees in general.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-COMPACT_LATEX          = NO
-
-# The PAPER_TYPE tag can be used to set the paper type that is used by the
-# printer.
-# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
-# 14 inches) and executive (7.25 x 10.5 inches).
-# The default value is: a4.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-PAPER_TYPE             = a4
-
-# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
-# that should be included in the LaTeX output. To get the times font for
-# instance you can specify
-# EXTRA_PACKAGES=times
-# If left blank no extra packages will be included.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-EXTRA_PACKAGES         =
-
-# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
-# generated LaTeX document. The header should contain everything until the first
-# chapter. If it is left blank doxygen will generate a standard header. See
-# section "Doxygen usage" for information on how to let doxygen write the
-# default header to a separate file.
-#
-# Note: Only use a user-defined header if you know what you are doing! The
-# following commands have a special meaning inside the header: $title,
-# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
-# $projectbrief, $projectlogo. Doxygen will replace $title with the empy string,
-# for the replacement values of the other commands the user is refered to
-# HTML_HEADER.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_HEADER           =
-
-# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
-# generated LaTeX document. The footer should contain everything after the last
-# chapter. If it is left blank doxygen will generate a standard footer. See
-# LATEX_HEADER for more information on how to generate a default footer and what
-# special commands can be used inside the footer.
-#
-# Note: Only use a user-defined footer if you know what you are doing!
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_FOOTER           =
-
-# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
-# other source files which should be copied to the LATEX_OUTPUT output
-# directory. Note that the files will be copied as-is; there are no commands or
-# markers available.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-#LATEX_EXTRA_FILES      =
-
-# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
-# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
-# contain links (just like the HTML output) instead of page references. This
-# makes the output suitable for online browsing using a PDF viewer.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-PDF_HYPERLINKS         = YES
-
-# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
-# the PDF file directly from the LaTeX files. Set this option to YES to get a
-# higher quality PDF documentation.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-USE_PDFLATEX           = YES
-
-# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
-# command to the generated LaTeX files. This will instruct LaTeX to keep running
-# if errors occur, instead of asking the user for help. This option is also used
-# when generating formulas in HTML.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_BATCHMODE        = NO
-
-# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
-# index chapters (such as File Index, Compound Index, etc.) in the output.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_HIDE_INDICES     = NO
-
-# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
-# code with syntax highlighting in the LaTeX output.
-#
-# Note that which sources are shown also depends on other settings such as
-# SOURCE_BROWSER.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_SOURCE_CODE      = NO
-
-# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
-# bibliography, e.g. plainnat, or ieeetr. See
-# http://en.wikipedia.org/wiki/BibTeX and \cite for more info.
-# The default value is: plain.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_BIB_STYLE        = plain
-
-#---------------------------------------------------------------------------
-# Configuration options related to the RTF output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_RTF tag is set to YES doxygen will generate RTF output. The
-# RTF output is optimized for Word 97 and may not look too pretty with other RTF
-# readers/editors.
-# The default value is: NO.
-
-GENERATE_RTF           = NO
-
-# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: rtf.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_OUTPUT             = rtf
-
-# If the COMPACT_RTF tag is set to YES doxygen generates more compact RTF
-# documents. This may be useful for small projects and may help to save some
-# trees in general.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-COMPACT_RTF            = NO
-
-# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
-# contain hyperlink fields. The RTF file will contain links (just like the HTML
-# output) instead of page references. This makes the output suitable for online
-# browsing using Word or some other Word compatible readers that support those
-# fields.
-#
-# Note: WordPad (write) and others do not support links.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_HYPERLINKS         = NO
-
-# Load stylesheet definitions from file. Syntax is similar to doxygen's config
-# file, i.e. a series of assignments. You only have to provide replacements,
-# missing definitions are set to their default value.
-#
-# See also section "Doxygen usage" for information on how to generate the
-# default style sheet that doxygen normally uses.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_STYLESHEET_FILE    =
-
-# Set optional variables used in the generation of an RTF document. Syntax is
-# similar to doxygen's config file. A template extensions file can be generated
-# using doxygen -e rtf extensionFile.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_EXTENSIONS_FILE    =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the man page output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_MAN tag is set to YES doxygen will generate man pages for
-# classes and files.
-# The default value is: NO.
-
-GENERATE_MAN           = NO
-
-# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it. A directory man3 will be created inside the directory specified by
-# MAN_OUTPUT.
-# The default directory is: man.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-MAN_OUTPUT             = man
-
-# The MAN_EXTENSION tag determines the extension that is added to the generated
-# man pages. In case the manual section does not start with a number, the number
-# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
-# optional.
-# The default value is: .3.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-MAN_EXTENSION          = .3
-
-# The MAN_SUBDIR tag determines the name of the directory created within
-# MAN_OUTPUT in which the man pages are placed. If defaults to man followed by
-# MAN_EXTENSION with the initial . removed.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-#MAN_SUBDIR             =
-
-# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
-# will generate one additional man file for each entity documented in the real
-# man page(s). These additional files only source the real man page, but without
-# them the man command would be unable to find the correct page.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-MAN_LINKS              = NO
-
-#---------------------------------------------------------------------------
-# Configuration options related to the XML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_XML tag is set to YES doxygen will generate an XML file that
-# captures the structure of the code including all documentation.
-# The default value is: NO.
-
-GENERATE_XML           = YES
-
-# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: xml.
-# This tag requires that the tag GENERATE_XML is set to YES.
-
-XML_OUTPUT             = xml
-
-# If the XML_PROGRAMLISTING tag is set to YES doxygen will dump the program
-# listings (including syntax highlighting and cross-referencing information) to
-# the XML output. Note that enabling this will significantly increase the size
-# of the XML output.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_XML is set to YES.
-
-XML_PROGRAMLISTING     = YES
-
-#---------------------------------------------------------------------------
-# Configuration options related to the DOCBOOK output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_DOCBOOK tag is set to YES doxygen will generate Docbook files
-# that can be used to generate PDF.
-# The default value is: NO.
-
-#GENERATE_DOCBOOK       = NO
-
-# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
-# front of it.
-# The default directory is: docbook.
-# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
-
-#DOCBOOK_OUTPUT         = docbook
-
-# If the DOCBOOK_PROGRAMLISTING tag is set to YES doxygen will include the
-# program listings (including syntax highlighting and cross-referencing
-# information) to the DOCBOOK output. Note that enabling this will significantly
-# increase the size of the DOCBOOK output.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
-
-#DOCBOOK_PROGRAMLISTING = NO
-
-#---------------------------------------------------------------------------
-# Configuration options for the AutoGen Definitions output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_AUTOGEN_DEF tag is set to YES doxygen will generate an AutoGen
-# Definitions (see http://autogen.sf.net) file that captures the structure of
-# the code including all documentation. Note that this feature is still
-# experimental and incomplete at the moment.
-# The default value is: NO.
-
-GENERATE_AUTOGEN_DEF   = NO
-
-#---------------------------------------------------------------------------
-# Configuration options related to the Perl module output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_PERLMOD tag is set to YES doxygen will generate a Perl module
-# file that captures the structure of the code including all documentation.
-#
-# Note that this feature is still experimental and incomplete at the moment.
-# The default value is: NO.
-
-GENERATE_PERLMOD       = NO
-
-# If the PERLMOD_LATEX tag is set to YES doxygen will generate the necessary
-# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
-# output from the Perl module output.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_PERLMOD is set to YES.
-
-PERLMOD_LATEX          = NO
-
-# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be nicely
-# formatted so it can be parsed by a human reader. This is useful if you want to
-# understand what is going on. On the other hand, if this tag is set to NO the
-# size of the Perl module output will be much smaller and Perl will parse it
-# just the same.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_PERLMOD is set to YES.
-
-PERLMOD_PRETTY         = YES
-
-# The names of the make variables in the generated doxyrules.make file are
-# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
-# so different doxyrules.make files included by the same Makefile don't
-# overwrite each other's variables.
-# This tag requires that the tag GENERATE_PERLMOD is set to YES.
-
-PERLMOD_MAKEVAR_PREFIX =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the preprocessor
-#---------------------------------------------------------------------------
-
-# If the ENABLE_PREPROCESSING tag is set to YES doxygen will evaluate all
-# C-preprocessor directives found in the sources and include files.
-# The default value is: YES.
-
-ENABLE_PREPROCESSING   = YES
-
-# If the MACRO_EXPANSION tag is set to YES doxygen will expand all macro names
-# in the source code. If set to NO only conditional compilation will be
-# performed. Macro expansion can be done in a controlled way by setting
-# EXPAND_ONLY_PREDEF to YES.
-# The default value is: NO.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-MACRO_EXPANSION        = NO
-
-# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
-# the macro expansion is limited to the macros specified with the PREDEFINED and
-# EXPAND_AS_DEFINED tags.
-# The default value is: NO.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-EXPAND_ONLY_PREDEF     = NO
-
-# If the SEARCH_INCLUDES tag is set to YES the includes files in the
-# INCLUDE_PATH will be searched if a #include is found.
-# The default value is: YES.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-SEARCH_INCLUDES        = YES
-
-# The INCLUDE_PATH tag can be used to specify one or more directories that
-# contain include files that are not input files but should be processed by the
-# preprocessor.
-# This tag requires that the tag SEARCH_INCLUDES is set to YES.
-
-INCLUDE_PATH           =
-
-# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
-# patterns (like *.h and *.hpp) to filter out the header-files in the
-# directories. If left blank, the patterns specified with FILE_PATTERNS will be
-# used.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-INCLUDE_FILE_PATTERNS  =
-
-# The PREDEFINED tag can be used to specify one or more macro names that are
-# defined before the preprocessor is started (similar to the -D option of e.g.
-# gcc). The argument of the tag is a list of macros of the form: name or
-# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
-# is assumed. To prevent a macro definition from being undefined via #undef or
-# recursively expanded use the := operator instead of the = operator.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-PREDEFINED             = DMLC_USE_CXX11
-
-# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
-# tag can be used to specify a list of macro names that should be expanded. The
-# macro definition that is found in the sources will be used. Use the PREDEFINED
-# tag if you want to use a different macro definition that overrules the
-# definition found in the source code.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-EXPAND_AS_DEFINED      =
-
-# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
-# remove all references to function-like macros that are alone on a line, have
-# an all uppercase name, and do not end with a semicolon. Such function macros
-# are typically used for boiler-plate code, and will confuse the parser if not
-# removed.
-# The default value is: YES.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-SKIP_FUNCTION_MACROS   = YES
-
-#---------------------------------------------------------------------------
-# Configuration options related to external references
-#---------------------------------------------------------------------------
-
-# The TAGFILES tag can be used to specify one or more tag files. For each tag
-# file the location of the external documentation should be added. The format of
-# a tag file without this location is as follows:
-# TAGFILES = file1 file2 ...
-# Adding location for the tag files is done as follows:
-# TAGFILES = file1=loc1 "file2 = loc2" ...
-# where loc1 and loc2 can be relative or absolute paths or URLs. See the
-# section "Linking to external documentation" for more information about the use
-# of tag files.
-# Note: Each tag file must have a unique name (where the name does NOT include
-# the path). If a tag file is not located in the directory in which doxygen is
-# run, you must also specify the path to the tagfile here.
-
-TAGFILES               =
-
-# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
-# tag file that is based on the input files it reads. See section "Linking to
-# external documentation" for more information about the usage of tag files.
-
-GENERATE_TAGFILE       =
-
-# If the ALLEXTERNALS tag is set to YES all external class will be listed in the
-# class index. If set to NO only the inherited external classes will be listed.
-# The default value is: NO.
-
-ALLEXTERNALS           = NO
-
-# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed in
-# the modules index. If set to NO, only the current project's groups will be
-# listed.
-# The default value is: YES.
-
-EXTERNAL_GROUPS        = YES
-
-# If the EXTERNAL_PAGES tag is set to YES all external pages will be listed in
-# the related pages index. If set to NO, only the current project's pages will
-# be listed.
-# The default value is: YES.
-
-#EXTERNAL_PAGES         = YES
-
-# The PERL_PATH should be the absolute path and name of the perl script
-# interpreter (i.e. the result of 'which perl').
-# The default file (with absolute path) is: /usr/bin/perl.
-
-PERL_PATH              = /usr/bin/perl
-
-#---------------------------------------------------------------------------
-# Configuration options related to the dot tool
-#---------------------------------------------------------------------------
-
-# If the CLASS_DIAGRAMS tag is set to YES doxygen will generate a class diagram
-# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
-# NO turns the diagrams off. Note that this option also works with HAVE_DOT
-# disabled, but it is recommended to install and use dot, since it yields more
-# powerful graphs.
-# The default value is: YES.
-
-CLASS_DIAGRAMS         = YES
-
-# You can define message sequence charts within doxygen comments using the \msc
-# command. Doxygen will then run the mscgen tool (see:
-# http://www.mcternan.me.uk/mscgen/)) to produce the chart and insert it in the
-# documentation. The MSCGEN_PATH tag allows you to specify the directory where
-# the mscgen tool resides. If left empty the tool is assumed to be found in the
-# default search path.
-
-MSCGEN_PATH            =
-
-# You can include diagrams made with dia in doxygen documentation. Doxygen will
-# then run dia to produce the diagram and insert it in the documentation. The
-# DIA_PATH tag allows you to specify the directory where the dia binary resides.
-# If left empty dia is assumed to be found in the default search path.
-
-#DIA_PATH               =
-
-# If set to YES, the inheritance and collaboration graphs will hide inheritance
-# and usage relations if the target is undocumented or is not a class.
-# The default value is: YES.
-
-HIDE_UNDOC_RELATIONS   = YES
-
-# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
-# available from the path. This tool is part of Graphviz (see:
-# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
-# Bell Labs. The other options in this section have no effect if this option is
-# set to NO
-# The default value is: YES.
-
-HAVE_DOT               = YES
-
-# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
-# to run in parallel. When set to 0 doxygen will base this on the number of
-# processors available in the system. You can set it explicitly to a value
-# larger than 0 to get control over the balance between CPU load and processing
-# speed.
-# Minimum value: 0, maximum value: 32, default value: 0.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_NUM_THREADS        = 0
-
-# When you want a differently looking font in the dot files that doxygen
-# generates you can specify the font name using DOT_FONTNAME. You need to make
-# sure dot is able to find the font, which can be done by putting it in a
-# standard location or by setting the DOTFONTPATH environment variable or by
-# setting DOT_FONTPATH to the directory containing the font.
-# The default value is: Helvetica.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_FONTNAME           = Helvetica
-
-# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
-# dot graphs.
-# Minimum value: 4, maximum value: 24, default value: 10.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_FONTSIZE           = 10
-
-# By default doxygen will tell dot to use the default font as specified with
-# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
-# the path where dot can find it using this tag.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_FONTPATH           =
-
-# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
-# each documented class showing the direct and indirect inheritance relations.
-# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-CLASS_GRAPH            = YES
-
-# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
-# graph for each documented class showing the direct and indirect implementation
-# dependencies (inheritance, containment, and class references variables) of the
-# class with other documented classes.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-COLLABORATION_GRAPH    = YES
-
-# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
-# groups, showing the direct groups dependencies.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-GROUP_GRAPHS           = YES
-
-# If the UML_LOOK tag is set to YES doxygen will generate inheritance and
-# collaboration diagrams in a style similar to the OMG's Unified Modeling
-# Language.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-UML_LOOK               = YES
-
-# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
-# class node. If there are many fields or methods and many nodes the graph may
-# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
-# number of items for each type to make the size more manageable. Set this to 0
-# for no limit. Note that the threshold may be exceeded by 50% before the limit
-# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
-# but if the number exceeds 15, the total amount of fields shown is limited to
-# 10.
-# Minimum value: 0, maximum value: 100, default value: 10.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-#UML_LIMIT_NUM_FIELDS   = 10
-
-# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
-# collaboration graphs will show the relations between templates and their
-# instances.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-TEMPLATE_RELATIONS     = NO
-
-# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
-# YES then doxygen will generate a graph for each documented file showing the
-# direct and indirect include dependencies of the file with other documented
-# files.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-INCLUDE_GRAPH          = YES
-
-# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
-# set to YES then doxygen will generate a graph for each documented file showing
-# the direct and indirect include dependencies of the file with other documented
-# files.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-INCLUDED_BY_GRAPH      = YES
-
-# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
-# dependency graph for every global function or class method.
-#
-# Note that enabling this option will significantly increase the time of a run.
-# So in most cases it will be better to enable call graphs for selected
-# functions only using the \callgraph command.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-CALL_GRAPH             = NO
-
-# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
-# dependency graph for every global function or class method.
-#
-# Note that enabling this option will significantly increase the time of a run.
-# So in most cases it will be better to enable caller graphs for selected
-# functions only using the \callergraph command.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-CALLER_GRAPH           = NO
-
-# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will graphical
-# hierarchy of all classes instead of a textual one.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-GRAPHICAL_HIERARCHY    = YES
-
-# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
-# dependencies a directory has on other directories in a graphical way. The
-# dependency relations are determined by the #include relations between the
-# files in the directories.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DIRECTORY_GRAPH        = YES
-
-# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
-# generated by dot.
-# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
-# to make the SVG files visible in IE 9+ (other browsers do not have this
-# requirement).
-# Possible values are: png, png:cairo, png:cairo:cairo, png:cairo:gd, png:gd,
-# png:gd:gd, jpg, jpg:cairo, jpg:cairo:gd, jpg:gd, jpg:gd:gd, gif, gif:cairo,
-# gif:cairo:gd, gif:gd, gif:gd:gd and svg.
-# The default value is: png.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_IMAGE_FORMAT       = png
-
-# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
-# enable generation of interactive SVG images that allow zooming and panning.
-#
-# Note that this requires a modern browser other than Internet Explorer. Tested
-# and working are Firefox, Chrome, Safari, and Opera.
-# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
-# the SVG files visible. Older versions of IE do not have SVG support.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-INTERACTIVE_SVG        = NO
-
-# The DOT_PATH tag can be used to specify the path where the dot tool can be
-# found. If left blank, it is assumed the dot tool can be found in the path.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_PATH               =
-
-# The DOTFILE_DIRS tag can be used to specify one or more directories that
-# contain dot files that are included in the documentation (see the \dotfile
-# command).
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOTFILE_DIRS           =
-
-# The MSCFILE_DIRS tag can be used to specify one or more directories that
-# contain msc files that are included in the documentation (see the \mscfile
-# command).
-
-MSCFILE_DIRS           =
-
-# The DIAFILE_DIRS tag can be used to specify one or more directories that
-# contain dia files that are included in the documentation (see the \diafile
-# command).
-
-#DIAFILE_DIRS           =
-
-# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
-# path where java can find the plantuml.jar file. If left blank, it is assumed
-# PlantUML is not used or called during a preprocessing step. Doxygen will
-# generate a warning when it encounters a \startuml command in this case and
-# will not generate output for the diagram.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-#PLANTUML_JAR_PATH      =
-
-# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
-# that will be shown in the graph. If the number of nodes in a graph becomes
-# larger than this value, doxygen will truncate the graph, which is visualized
-# by representing a node as a red box. Note that doxygen if the number of direct
-# children of the root node in a graph is already larger than
-# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that
-# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
-# Minimum value: 0, maximum value: 10000, default value: 50.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_GRAPH_MAX_NODES    = 50
-
-# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
-# generated by dot. A depth value of 3 means that only nodes reachable from the
-# root by following a path via at most 3 edges will be shown. Nodes that lay
-# further from the root node will be omitted. Note that setting this option to 1
-# or 2 may greatly reduce the computation time needed for large code bases. Also
-# note that the size of a graph can be further restricted by
-# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
-# Minimum value: 0, maximum value: 1000, default value: 0.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-MAX_DOT_GRAPH_DEPTH    = 0
-
-# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
-# background. This is disabled by default, because dot on Windows does not seem
-# to support this out of the box.
-#
-# Warning: Depending on the platform used, enabling this option may lead to
-# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
-# read).
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_TRANSPARENT        = NO
-
-# Set the DOT_MULTI_TARGETS tag to YES allow dot to generate multiple output
-# files in one run (i.e. multiple -o and -T options on the command line). This
-# makes dot run faster, but since only newer versions of dot (>1.8.10) support
-# this, this feature is disabled by default.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_MULTI_TARGETS      = YES
-
-# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
-# explaining the meaning of the various boxes and arrows in the dot generated
-# graphs.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-GENERATE_LEGEND        = YES
-
-# If the DOT_CLEANUP tag is set to YES doxygen will remove the intermediate dot
-# files that are used to generate the various graphs.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_CLEANUP            = YES
diff --git a/nnvm/docs/Makefile b/nnvm/docs/Makefile
deleted file mode 100644
index 1e45fb5e3787092527c9a4134e773ee4b7f03298..0000000000000000000000000000000000000000
--- a/nnvm/docs/Makefile
+++ /dev/null
@@ -1,193 +0,0 @@
-# Makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS    =
-SPHINXBUILD   = sphinx-build
-PAPER         =
-BUILDDIR      = _build
-
-# User-friendly check for sphinx-build
-ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
-$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
-endif
-
-# Internal variables.
-PAPEROPT_a4     = -D latex_paper_size=a4
-PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-# the i18n builder cannot share the environment and doctrees with the others
-I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-
-.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
-
-help:
-	@echo "Please use \`make <target>' where <target> is one of"
-	@echo "  html       to make standalone HTML files"
-	@echo "  dirhtml    to make HTML files named index.html in directories"
-	@echo "  singlehtml to make a single large HTML file"
-	@echo "  pickle     to make pickle files"
-	@echo "  json       to make JSON files"
-	@echo "  htmlhelp   to make HTML files and a HTML help project"
-	@echo "  qthelp     to make HTML files and a qthelp project"
-	@echo "  applehelp  to make an Apple Help Book"
-	@echo "  devhelp    to make HTML files and a Devhelp project"
-	@echo "  epub       to make an epub"
-	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
-	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
-	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
-	@echo "  text       to make text files"
-	@echo "  man        to make manual pages"
-	@echo "  texinfo    to make Texinfo files"
-	@echo "  info       to make Texinfo files and run them through makeinfo"
-	@echo "  gettext    to make PO message catalogs"
-	@echo "  changes    to make an overview of all changed/added/deprecated items"
-	@echo "  xml        to make Docutils-native XML files"
-	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
-	@echo "  linkcheck  to check all external links for integrity"
-	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
-	@echo "  coverage   to run coverage check of the documentation (if enabled)"
-
-clean:
-	rm -rf $(BUILDDIR)/*
-	rm -rf gen_modules
-
-html:
-	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
-	@echo
-	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
-
-dirhtml:
-	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
-	@echo
-	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
-
-singlehtml:
-	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
-	@echo
-	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
-
-pickle:
-	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
-	@echo
-	@echo "Build finished; now you can process the pickle files."
-
-json:
-	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
-	@echo
-	@echo "Build finished; now you can process the JSON files."
-
-htmlhelp:
-	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
-	@echo
-	@echo "Build finished; now you can run HTML Help Workshop with the" \
-	      ".hhp project file in $(BUILDDIR)/htmlhelp."
-
-qthelp:
-	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
-	@echo
-	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
-	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
-	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/rabit.qhcp"
-	@echo "To view the help file:"
-	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/rabit.qhc"
-
-applehelp:
-	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
-	@echo
-	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
-	@echo "N.B. You won't be able to view it unless you put it in" \
-	      "~/Library/Documentation/Help or install it in your application" \
-	      "bundle."
-
-devhelp:
-	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
-	@echo
-	@echo "Build finished."
-	@echo "To view the help file:"
-	@echo "# mkdir -p $$HOME/.local/share/devhelp/rabit"
-	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/rabit"
-	@echo "# devhelp"
-
-epub:
-	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
-	@echo
-	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
-
-latex:
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo
-	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
-	@echo "Run \`make' in that directory to run these through (pdf)latex" \
-	      "(use \`make latexpdf' here to do that automatically)."
-
-latexpdf:
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo "Running LaTeX files through pdflatex..."
-	$(MAKE) -C $(BUILDDIR)/latex all-pdf
-	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-
-latexpdfja:
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo "Running LaTeX files through platex and dvipdfmx..."
-	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
-	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-
-text:
-	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
-	@echo
-	@echo "Build finished. The text files are in $(BUILDDIR)/text."
-
-man:
-	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
-	@echo
-	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
-
-texinfo:
-	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
-	@echo
-	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
-	@echo "Run \`make' in that directory to run these through makeinfo" \
-	      "(use \`make info' here to do that automatically)."
-
-info:
-	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
-	@echo "Running Texinfo files through makeinfo..."
-	make -C $(BUILDDIR)/texinfo info
-	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
-
-gettext:
-	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
-	@echo
-	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
-
-changes:
-	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
-	@echo
-	@echo "The overview file is in $(BUILDDIR)/changes."
-
-linkcheck:
-	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
-	@echo
-	@echo "Link check complete; look for any errors in the above output " \
-	      "or in $(BUILDDIR)/linkcheck/output.txt."
-
-doctest:
-	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
-	@echo "Testing of doctests in the sources finished, look at the " \
-	      "results in $(BUILDDIR)/doctest/output.txt."
-
-coverage:
-	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
-	@echo "Testing of coverage in the sources finished, look at the " \
-	      "results in $(BUILDDIR)/coverage/python.txt."
-
-xml:
-	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
-	@echo
-	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
-
-pseudoxml:
-	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
-	@echo
-	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
diff --git a/nnvm/docs/README.txt b/nnvm/docs/README.txt
deleted file mode 100644
index 8b8c750822be974deb3a2c243e81e5e1d62bcef7..0000000000000000000000000000000000000000
--- a/nnvm/docs/README.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-The documentation of nnvm is generated with recommonmark and sphinx.
-
-- pip install sphinx>=1.5.5 sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark
-- Build tvm first in the root folder.
diff --git a/nnvm/docs/conf.py b/nnvm/docs/conf.py
deleted file mode 100644
index 64e40eb748b5af64fe2a18ac761fb803352e381d..0000000000000000000000000000000000000000
--- a/nnvm/docs/conf.py
+++ /dev/null
@@ -1,209 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# documentation build configuration file, created by
-# sphinx-quickstart on Thu Jul 23 19:40:08 2015.
-#
-# This file is execfile()d with the current directory set to its
-# containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-import sys
-import os, subprocess
-import shlex
-import recommonmark
-import sphinx_gallery
-from tvm.contrib import rpc, graph_runtime
-from recommonmark.parser import CommonMarkParser
-from recommonmark.transform import AutoStructify
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
-sys.path.insert(0, os.path.join(curr_path, '../python/'))
-
-# -- General configuration ------------------------------------------------
-
-# General information about the project.
-project = u'nnvm'
-author = u'%s developers' % project
-copyright = u'2017, %s' % author
-github_doc_root = 'https://github.com/dmlc/nnvm/tree/master/docs/'
-
-# add markdown parser
-CommonMarkParser.github_doc_root = github_doc_root
-source_parsers = {
-    '.md': CommonMarkParser
-}
-os.environ['NNVM_BUILD_DOC'] = '1'
-# Version information.
-import nnvm
-version = nnvm.__version__
-release = nnvm.__version__
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones
-extensions = [
-    'sphinx.ext.autodoc',
-    'sphinx.ext.autosummary',
-    'sphinx.ext.intersphinx',
-    'sphinx.ext.napoleon',
-    'sphinx.ext.mathjax',
-    'sphinx_gallery.gen_gallery',
-]
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# The suffix(es) of source filenames.
-# You can specify multiple suffixes as a list of strings:
-# source_suffix = ['.rst', '.md']
-source_suffix = ['.rst', '.md']
-
-# The encoding of source files.
-#source_encoding = 'utf-8-sig'
-
-# generate autosummary even if no references
-autosummary_generate = True
-
-# The master toctree document.
-master_doc = 'index'
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#
-# This is also used if you do content translation via gettext catalogs.
-# Usually you set "language" from the command line for these cases.
-language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-#today = ''
-# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = ['_build']
-
-# The reST default role (used for this markup: `text`) to use for all
-# documents.
-#default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-#add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-#show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
-
-# If true, keep warnings as "system message" paragraphs in the built documents.
-#keep_warnings = False
-
-# If true, `todo` and `todoList` produce output, else they produce nothing.
-todo_include_todos = False
-
-# -- Options for HTML output ----------------------------------------------
-
-# The theme is set by the make target
-html_theme = os.environ.get('NNVM_THEME', 'rtd')
-
-on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
-# only import the rtd theme and set it if we want to build the docs locally
-if not on_rtd and html_theme == 'rtd':
-    import sphinx_rtd_theme
-    html_theme = 'sphinx_rtd_theme'
-    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-# html_static_path = ['_static']
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = project + 'doc'
-
-# -- Options for LaTeX output ---------------------------------------------
-latex_elements = {
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title,
-#  author, documentclass [howto, manual, or own class]).
-latex_documents = [
-  (master_doc, '%s.tex' % project, project,
-   author, 'manual'),
-]
-
-# hook for doxygen
-def run_doxygen(folder):
-    """Run the doxygen make command in the designated folder."""
-    try:
-        #retcode = subprocess.call("cd %s; make doc" % folder, shell=True)
-        retcode = subprocess.call("rm -rf _build/html/doxygen", shell=True)
-        retcode = subprocess.call("mkdir -p _build/html", shell=True)
-        retcode = subprocess.call("cp -rf doxygen/html _build/html/doxygen", shell=True)
-        if retcode < 0:
-            sys.stderr.write("doxygen terminated by signal %s" % (-retcode))
-    except OSError as e:
-        sys.stderr.write("doxygen execution failed: %s" % e)
-
-intersphinx_mapping = {
-    'python': ('https://docs.python.org/{.major}'.format(sys.version_info), None),
-    'numpy': ('http://docs.scipy.org/doc/numpy/', None),
-    'scipy': ('http://docs.scipy.org/doc/scipy/reference', None),
-    'matplotlib': ('http://matplotlib.org/', None),
-    'tvm': ('http://docs.tvmlang.org/', None),
-}
-
-
-from sphinx_gallery.sorting import ExplicitOrder
-
-examples_dirs = ['../tutorials/']
-gallery_dirs = ['tutorials']
-
-subsection_order = ExplicitOrder([])
-
-def generate_doxygen_xml(app):
-    """Run the doxygen make commands if we're on the ReadTheDocs server"""
-    run_doxygen('..')
-
-def setup(app):
-    # Add hook for building doxygen xml when needed
-    # no c++ API for now
-    app.connect("builder-inited", generate_doxygen_xml)
-    app.add_config_value('recommonmark_config', {
-        'url_resolver': lambda url: github_doc_root + url,
-        'auto_doc_ref': True
-            }, True)
-    app.add_transform(AutoStructify)
-
-
-sphinx_gallery_conf = {
-    'backreferences_dir': 'gen_modules/backreferences',
-    'doc_module': ('tvm', 'nnvm', 'numpy'),
-    'reference_url': {
-        'nnvm': None,
-        'tvm': 'http://docs.tvmlang.org',
-        'numpy': 'http://docs.scipy.org/doc/numpy-1.9.1'},
-    'examples_dirs': examples_dirs,
-    'gallery_dirs': gallery_dirs,
-    'subsection_order': subsection_order,
-    'find_mayavi_figures': False,
-    'filename_pattern': '.py',
-    'expected_failing_examples': []
-}
diff --git a/nnvm/docs/dev/index.rst b/nnvm/docs/dev/index.rst
deleted file mode 100644
index 0647c9cce58605ba8086e9b77756f5d421dd231b..0000000000000000000000000000000000000000
--- a/nnvm/docs/dev/index.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-Design Note
-===========
-
-In this part of the documentation, we share the rationale for the specific choices made when designing NNVM.
-
-.. toctree::
-   :maxdepth: 2
-
-   overview
diff --git a/nnvm/docs/how_to/contribute.md b/nnvm/docs/how_to/contribute.md
deleted file mode 100644
index 34ff2f19644c29c559ad5f6663493fbaf13b4228..0000000000000000000000000000000000000000
--- a/nnvm/docs/how_to/contribute.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# Contribute to NNVM
-
-NNVM has been developed by community members.
-Everyone is more than welcome to contribute.
-It is a way to make the project better and more accessible to more users.
-The NNVM compiler relies on TVM to deploy to different hardware backends.
-You can improve the compiler's performance by contributing to [TVM](https://github.com/dmlc/tvm).
-
-- Please update [NEWS.md](https://github.com/dmlc/nnvm/blob/master/NEWS.md) to
-  note your changes to the API or any newly added documents.
-
-## Guidelines
-* [Submit Pull Request](#submit-pull-request)
-* [Git Workflow Howtos](#git-workflow-howtos)
-  - [How to resolve conflict with master](#how-to-resolve-conflict-with-master)
-  - [How to combine multiple commits into one](#how-to-combine-multiple-commits-into-one)
-  - [What is the consequence of force push](#what-is-the-consequence-of-force-push)
-* [Testcases](#testcases)
-* [Core Library](#core-library)
-* [Python Package](#python-package)
-
-## Submit Pull Request
-* Before submitting, please rebase your code on the most recent version of master. You can do this by
-```bash
-git remote add upstream [url to nnvm repo]
-git fetch upstream
-git rebase upstream/master
-```
-* If you have multiple small commits,
-  it might be good to merge them together (use git rebase, then squash) into more meaningful groups.
-* Send the pull request!
-  - Fix the problems reported by the automatic checks.
-  - If you are contributing a new module or new function, add a test.
-
-## Git Workflow Howtos
-### How to resolve conflict with master
-- First rebase to most recent master
-```bash
-# The first two steps can be skipped after you do it once.
-git remote add upstream [url to nnvm repo]
-git fetch upstream
-git rebase upstream/master
-```
-- Git may show some conflicts it cannot merge, say ```conflicted.py```.
-  - Manually modify the file to resolve the conflict.
-  - After you have resolved the conflict, mark it as resolved by
-```bash
-git add conflicted.py
-```
-- Then you can continue the rebase by
-```bash
-git rebase --continue
-```
-- Finally, push to your fork; you may need to force push here.
-```bash
-git push --force
-```
-
-### How to combine multiple commits into one
-Sometimes we want to combine multiple commits, especially when later commits are only fixes to previous ones,
-to create a PR with a set of meaningful commits. You can do this with the following steps.
-- Before doing so, configure git's default editor if you haven't done so already.
-```bash
-git config core.editor the-editor-you-like
-```
-- Assuming we want to combine the last 3 commits, type the following command
-```bash
-git rebase -i HEAD~3
-```
-- A text editor will pop up. Set the first commit to ```pick```, and change the later ones to ```squash```.
-- After you save the file, another text editor will pop up asking you to modify the combined commit message.
-- Push the changes to your fork; you need to force push.
-```bash
-git push --force
-```
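As a sketch, the squash workflow above can be reproduced non-interactively in a throwaway repository (the file names and commit messages here are hypothetical, invented only for the demonstration):

```shell
set -e
# Create a throwaway repo with an initial commit plus three commits to squash.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo base > f.txt; git add f.txt; git commit -qm "initial"
echo a >> f.txt; git commit -qam "feature"
echo b >> f.txt; git commit -qam "fix typo"
echo c >> f.txt; git commit -qam "fix lint"
# Non-interactive equivalent of `git rebase -i HEAD~3` with the later
# commits squashed into the first: move HEAD back, then recommit everything.
git reset --soft HEAD~3
git commit -qm "feature (squashed)"
git rev-list --count HEAD
```

The history now contains two commits: "initial" and the single squashed "feature" commit.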
-
-### Reset to the most recent master
-You can always use git reset to reset your branch to the most recent master.
-Note that all your ***local changes will get lost***,
-so only do this when you have no local changes or when your pull request has just been merged.
-```bash
-git reset --hard [hash tag of master]
-git push --force
-```
-
-### What is the consequence of force push
-The previous two tips require a force push because we altered the commit history.
-It is fine to force push to your own fork, as long as the commits changed are only yours.
-
-## Testcases
-- All the testcases are in the `tests` directory.
-
-## Core Library
-- Follow the Google C++ style guide.
-- We use doxygen to document all the interface code.
-- You can reproduce the linter checks by typing ```make lint```
-
-## Python Package
-- Always add docstrings to new functions in numpydoc format.
-- You can reproduce the linter checks by typing ```make lint```
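As an illustration of the numpydoc convention mentioned above, a hypothetical helper might be documented like this (the function and all its names are made up for the example):

```python
def clip(value, lo, hi):
    """Clamp a value into the closed interval [lo, hi].

    Parameters
    ----------
    value : float
        The value to clamp.
    lo : float
        Lower bound of the interval.
    hi : float
        Upper bound of the interval; must satisfy ``hi >= lo``.

    Returns
    -------
    float
        ``value`` limited to the range ``[lo, hi]``.

    Examples
    --------
    >>> clip(5.0, 0.0, 1.0)
    1.0
    """
    return max(lo, min(hi, value))
```

The `Parameters`, `Returns`, and `Examples` sections are what Sphinx's napoleon extension renders into the API docs.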
diff --git a/nnvm/docs/how_to/deploy.md b/nnvm/docs/how_to/deploy.md
deleted file mode 100644
index d6c011bd25841d4217c5e939458090a0215a749c..0000000000000000000000000000000000000000
--- a/nnvm/docs/how_to/deploy.md
+++ /dev/null
@@ -1,119 +0,0 @@
-Deploy Compiled Modules
-=======================
-NNVM compiled modules are fully embedded in the TVM runtime as long as the ```GRAPH_RUNTIME``` option
-is enabled in the TVM runtime. Check out the [TVM documentation](http://docs.tvmlang.org/) for
-how to deploy the TVM runtime to your system.
-
-In a nutshell, we will need three items to deploy a compiled module.
-Check out our tutorials on getting started with the NNVM compiler for more details.
-
-- The graph JSON data, which contains the execution graph.
-- The TVM module library of compiled functions.
-- The parameter blob holding the stored parameters.
-
-We can then use TVM's runtime API to deploy the compiled module.
-Here is an example in Python.
-
-```python
-import tvm
-
-# tvm module for compiled functions.
-loaded_lib = tvm.module.load("deploy.so")
-# json graph
-loaded_json = open(temp.relpath("deploy.json")).read()
-# parameters in binary
-loaded_params = bytearray(open(temp.relpath("deploy.params"), "rb").read())
-
-fcreate = tvm.get_global_func("tvm.graph_runtime.create")
-ctx = tvm.gpu(0)
-gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
-set_input, get_output, run = gmodule["set_input"], gmodule["get_output"], gmodule["run"]
-set_input("x", tvm.nd.array(x_np))
-gmodule["load_params"](loaded_params)
-run()
-out = tvm.nd.empty(shape)
-get_output(0, out)
-print(out.asnumpy())
-```
-
-An example in C++.
-```cpp
-#include <dlpack/dlpack.h>
-#include <tvm/runtime/module.h>
-#include <tvm/runtime/registry.h>
-#include <tvm/runtime/packed_func.h>
-
-#include <fstream>
-#include <iterator>
-#include <algorithm>
-
-int main()
-{
-    // tvm module for compiled functions
-    tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("deploy.so");
-
-    // json graph
-    std::ifstream json_in("deploy.json", std::ios::in);
-    std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
-    json_in.close();
-
-    // parameters in binary
-    std::ifstream params_in("deploy.params", std::ios::binary);
-    std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
-    params_in.close();
-
-    // parameters need to be TVMByteArray type to indicate the binary data
-    TVMByteArray params_arr;
-    params_arr.data = params_data.c_str();
-    params_arr.size = params_data.length();
-
-    int dtype_code = kDLFloat;
-    int dtype_bits = 32;
-    int dtype_lanes = 1;
-    int device_type = kDLCPU;
-    int device_id = 0;
-
-    // get global function module for graph runtime
-    tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);
-
-    DLTensor* x;
-    int in_ndim = 4;
-    int64_t in_shape[4] = {1, 3, 224, 224};
-    TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
-    // load image data saved in binary
-    std::ifstream data_fin("cat.bin", std::ios::binary);
-    data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);
-
-    // get the function from the module (set input data)
-    tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
-    set_input("data", x);
-
-    // get the function from the module (load parameters)
-    tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
-    load_params(params_arr);
-
-    // get the function from the module (run it)
-    tvm::runtime::PackedFunc run = mod.GetFunction("run");
-    run();
-
-    DLTensor* y;
-    int out_ndim = 1;
-    int64_t out_shape[1] = {1000, };
-    TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);
-
-    // get the function from the module (get output data)
-    tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
-    get_output(0, y);
-
-    // get the maximum position in output vector
-    auto y_iter = static_cast<float*>(y->data);
-    auto max_iter = std::max_element(y_iter, y_iter + 1000);
-    auto max_index = std::distance(y_iter, max_iter);
-    std::cout << "The maximum position in output vector is: " << max_index << std::endl;
-
-    TVMArrayFree(x);
-    TVMArrayFree(y);
-
-    return 0;
-}
-```
diff --git a/nnvm/docs/how_to/install.md b/nnvm/docs/how_to/install.md
deleted file mode 100644
index 0190658e2d6f72c93fee45a7e07427719d98aa10..0000000000000000000000000000000000000000
--- a/nnvm/docs/how_to/install.md
+++ /dev/null
@@ -1,89 +0,0 @@
-Installation Guide
-==================
-This page gives instructions on how to build and install the nnvm compiler package from
-scratch on various systems. It consists of two steps:
-
-1. First build the shared library from the C++ code (`libnnvm_compiler.so` for Linux/OSX and `libnnvm_compiler.dll` for Windows).
-2. Setup for the language packages (e.g. Python Package).
-
-To get started, clone the nnvm repo from GitHub. It is important to clone the submodules along with it, using the ```--recursive``` option.
-```bash
-git clone --recursive https://github.com/dmlc/nnvm
-```
-Windows users who use GitHub tools can open the Git shell and type the following commands.
-```bash
-git submodule init
-git submodule update --recursive
-```
-
-The NNVM compiler depends on TVM and TOPI, so make sure you install them by following the [TVM document](http://docs.tvmlang.org/).
-Note that it is necessary to build TVM with LLVM support to take full advantage of the NNVM compiler.
-
-## Contents
-- [Build the Shared Library](#build-the-shared-library)
-- [Python Package Installation](#python-package-installation)
-- [Solution to Installation Error](#solution-to-installation-error)
-
-## Build the Shared Library
-
-Our goal is to build the shared library:
-- On Linux/OSX the target library is `libnnvm_compiler.so`
-- On Windows the target library is `libnnvm_compiler.dll`
-
-The minimal build requirement is
-- A recent C++ compiler supporting C++11 (g++ 4.8 or higher)
-
-You can edit `make/config.mk` to change the compile options, and then build by
-`make`. If everything goes well, we can go to the specific language installation section.
-
-### Building on Windows
-
-NNVM supports building via MSVC using CMake. The minimum required VS version is **Visual Studio Community 2015 Update 3**.
-In order to generate the VS solution file using CMake, make sure you have a recent version of CMake added to your path.
-The NNVM compiler depends on TVM; please follow the [TVM document](http://docs.tvmlang.org/how_to/install.html#building-on-windows)
-to build the TVM Windows library. You can build TVM in the submodule folder under nnvm.
-
-After TVM is built, we can then build nnvm using the following commands.
-
-```bash
-mkdir build
-cd build
-cmake -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CONFIGURATION_TYPES="Release" ..
-```
-This will generate the VS project using the MSVC 14 64 bit generator. Open the .sln file in the build directory and build with Visual Studio.
-
-## Python Package Installation
-
-The Python package is located in the `python` folder.
-There are several ways to install the package; in all cases, the TVM library must be present in the Python environment:
-
-1. Set the environment variable `PYTHONPATH` to tell Python where to find
-   the library. For example, assume we cloned `nnvm` into the home directory
-   `~`; then we can add the following line to `~/.bashrc`.
-    This is ***recommended for developers*** who may change the code:
-    the changes are reflected immediately once you pull the code and rebuild the project (no need to call ```setup``` again).
-
-    ```bash
-    export PYTHONPATH=/path/to/nnvm/python:${PYTHONPATH}
-    ```
-
-2. Install the nnvm python bindings via `setup.py`:
-
-    ```bash
-    # install nnvm package for the current user
-    # NOTE: if you installed python via homebrew, --user is not needed during installation;
-    #       the package will be automatically installed to your user directory,
-    #       and providing the --user flag may trigger an error in that case.
-    cd python; python setup.py install --user; cd ..
-    ```
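As a quick sanity check after either installation method, you can ask Python which copy of a package it would actually import (sketched here with a stand-in stdlib module, since `nnvm` may not be importable in every environment; substitute `"nnvm"` in a real setup):

```python
import importlib.util

# Resolve a module the same way the interpreter would, honoring PYTHONPATH.
# "json" is a stand-in here; replace it with "nnvm" in a real setup.
spec = importlib.util.find_spec("json")
print(spec.origin)  # filesystem path of the module that would be imported
```

If the printed path does not point at your checkout, the environment is picking up a different installation.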
-
-## Solution to Installation Error
-
-If you encounter a problem during the installation process, you can try updating the submodules to the latest commit set.
-To update the submodules to the latest commit set, type the following command.
-
-```bash
-git submodule update --init --recursive
-```
-
-*WARNING: The default commit set in the submodules is the recommended setting. Using the latest commit set may lead to other compilation errors or issues.*
diff --git a/nnvm/docs/index.rst b/nnvm/docs/index.rst
deleted file mode 100644
index 939c46b45bb41181dd256dcbb8e85ab1dd36346a..0000000000000000000000000000000000000000
--- a/nnvm/docs/index.rst
+++ /dev/null
@@ -1,19 +0,0 @@
-NNVM Documentation
-==================
-This is the documentation for NNVM and the NNVM compiler.
-
-Contents
---------
-
-.. toctree::
-   :maxdepth: 1
-
-   self
-   how_to/install
-   tutorials/index
-   top
-   json_spec
-   how_to/contribute
-   how_to/deploy
-   api/python/index
-   dev/index
diff --git a/nnvm/examples/README.md b/nnvm/examples/README.md
deleted file mode 100644
index 112dc92caaaf551975ba723c101116cc46b76c78..0000000000000000000000000000000000000000
--- a/nnvm/examples/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-NNVM Examples
-=============
-This folder contains example snippets of running NNVM Compilation.
-
-- See also [Tutorials](../tutorials) for tutorials with detailed explanations.
diff --git a/nnvm/tests/ci_build/Dockerfile.gpu b/nnvm/tests/ci_build/Dockerfile.gpu
deleted file mode 100644
index 1206582ee088feaebf3a32e300d8e08bcfdf3f88..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/Dockerfile.gpu
+++ /dev/null
@@ -1,79 +0,0 @@
-FROM nvidia/cuda:8.0-cudnn7-devel
-
-# Base scripts
-RUN apt-get update --fix-missing
-
-COPY install/ubuntu_install_core.sh /install/ubuntu_install_core.sh
-RUN bash /install/ubuntu_install_core.sh
-
-COPY install/ubuntu_install_python.sh /install/ubuntu_install_python.sh
-RUN bash /install/ubuntu_install_python.sh
-
-COPY install/ubuntu_install_llvm.sh /install/ubuntu_install_llvm.sh
-RUN bash /install/ubuntu_install_llvm.sh
-
-COPY install/ubuntu_install_opencl.sh /install/ubuntu_install_opencl.sh
-RUN bash /install/ubuntu_install_opencl.sh
-
-COPY install/ubuntu_install_python_package.sh /install/ubuntu_install_python_package.sh
-RUN bash /install/ubuntu_install_python_package.sh
-
-COPY install/ubuntu_install_sphinx.sh /install/ubuntu_install_sphinx.sh
-RUN bash /install/ubuntu_install_sphinx.sh
-
-# Update recommonmark to the latest version
-RUN git clone https://github.com/rtfd/recommonmark
-RUN cd recommonmark; python setup.py install
-
-# Enable doxygen for c++ doc build
-RUN apt-get update && apt-get install -y doxygen graphviz libprotobuf-dev protobuf-compiler
-
-
-COPY install/ubuntu_install_java.sh /install/ubuntu_install_java.sh
-RUN bash /install/ubuntu_install_java.sh
-
-COPY install/ubuntu_install_nodejs.sh /install/ubuntu_install_nodejs.sh
-RUN bash /install/ubuntu_install_nodejs.sh
-
-COPY install/ubuntu_install_rocm.sh /install/ubuntu_install_rocm.sh
-RUN bash /install/ubuntu_install_rocm.sh
-
-COPY install/ubuntu_install_opengl.sh /install/ubuntu_install_opengl.sh
-RUN bash /install/ubuntu_install_opengl.sh
-
-COPY install/ubuntu_install_vulkan.sh /install/ubuntu_install_vulkan.sh
-RUN bash /install/ubuntu_install_vulkan.sh
-
-
-# DL Frameworks
-COPY install/ubuntu_install_mxnet.sh /install/ubuntu_install_mxnet.sh
-RUN bash /install/ubuntu_install_mxnet.sh
-
-COPY install/ubuntu_install_onnx.sh /install/ubuntu_install_onnx.sh
-RUN bash /install/ubuntu_install_onnx.sh
-
-COPY install/ubuntu_install_coreml.sh /install/ubuntu_install_coreml.sh
-RUN bash /install/ubuntu_install_coreml.sh
-
-COPY install/ubuntu_install_keras.sh /install/ubuntu_install_keras.sh
-RUN bash /install/ubuntu_install_keras.sh
-
-COPY install/ubuntu_install_darknet.sh /install/ubuntu_install_darknet.sh
-RUN bash /install/ubuntu_install_darknet.sh
-
-RUN pip install Pillow
-
-# Environment variables
-ENV PATH=/usr/local/nvidia/bin:${PATH}
-ENV PATH=/usr/local/cuda/bin:${PATH}
-ENV CPLUS_INCLUDE_PATH=/usr/local/cuda/include:${CPLUS_INCLUDE_PATH}
-ENV C_INCLUDE_PATH=/usr/local/cuda/include:${C_INCLUDE_PATH}
-ENV LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LIBRARY_PATH}
-ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
-
-ENV LD_LIBRARY_PATH=/opt/rocm/lib:${LD_LIBRARY_PATH}
-ENV PATH=/node_modules/.bin:${PATH}
-ENV VULKAN_SDK=/usr/local/VulkanSDK/1.0.65.0/x86_64
-ENV PATH=${PATH}:${VULKAN_SDK}/bin
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${VULKAN_SDK}/lib
-ENV VK_LAYER_PATH=${VULKAN_SDK}/etc/explicit_layer.d
diff --git a/nnvm/tests/ci_build/Dockerfile.lint b/nnvm/tests/ci_build/Dockerfile.lint
deleted file mode 100644
index 4ba4ca3be294676d091c7189b26b424adc8e0923..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/Dockerfile.lint
+++ /dev/null
@@ -1,6 +0,0 @@
-# For lint test
-FROM ubuntu:16.04
-
-RUN apt-get update && apt-get install -y python-pip sudo
-RUN apt-get install -y doxygen graphviz
-RUN pip install cpplint pylint
diff --git a/nnvm/tests/ci_build/README.md b/nnvm/tests/ci_build/README.md
deleted file mode 100644
index 68f0c2d0e96a0b0b6fbb6c5f0bb593cfb25e065e..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/README.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# CI Build Scripts
-
-This directory contains the files and setup instructions to run all tests.
-
-## Run locally
-
-To run locally, we need to first install
-[docker](https://docs.docker.com/engine/installation/) and
-[nvidia-docker](https://github.com/NVIDIA/nvidia-docker/wiki).
-
-Then we can run the tasks defined in the [Jenkinsfile](../../Jenkinsfile) by
-using [`ci_build.sh`](./ci_build.sh). For example:
-
-- lint the python codes
-
-  ```bash
-  ./ci_build.sh lint make pylint
-  ```
-
-- build codes with CUDA supports
-
-  ```bash
-  ./ci_build.sh gpu tests/scripts/task_build.sh
-  ```
-
-- do the python unittest
-
-  ```bash
-  ./ci_build.sh gpu tests/scripts/task_python_test.sh
-  ```
-
-- build the documents. The results will be available at `docs/_build/html`
-
-  ```bash
-  tests/ci_build/ci_build.sh gpu tests/scripts/task_python_docs.sh
-  ```
diff --git a/nnvm/tests/ci_build/ci_build.sh b/nnvm/tests/ci_build/ci_build.sh
deleted file mode 100755
index db4283ca97532fb9179331a64467009ac5f2c117..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/ci_build.sh
+++ /dev/null
@@ -1,126 +0,0 @@
-#!/usr/bin/env bash
-#
-# Execute command within a docker container
-#
-# Usage: ci_build.sh <CONTAINER_TYPE> [--dockerfile <DOCKERFILE_PATH>] [-it]
-#                    <COMMAND>
-#
-# CONTAINER_TYPE: Type of the docker container used the run the build: e.g.,
-#                 (cpu | gpu)
-#
-# DOCKERFILE_PATH: (Optional) Path to the Dockerfile used for docker build.  If
-#                  this optional value is not supplied (via the --dockerfile
-#                  flag), will use Dockerfile.CONTAINER_TYPE in default
-#
-# COMMAND: Command to be executed in the docker container
-#
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-
-# Get the command line arguments.
-CONTAINER_TYPE=$( echo "$1" | tr '[:upper:]' '[:lower:]' )
-shift 1
-
-# Dockerfile to be used in docker build
-DOCKERFILE_PATH="${SCRIPT_DIR}/Dockerfile.${CONTAINER_TYPE}"
-DOCKER_CONTEXT_PATH="${SCRIPT_DIR}"
-
-if [[ "$1" == "--dockerfile" ]]; then
-    DOCKERFILE_PATH="$2"
-    DOCKER_CONTEXT_PATH=$(dirname "${DOCKERFILE_PATH}")
-    echo "Using custom Dockerfile path: ${DOCKERFILE_PATH}"
-    echo "Using custom docker build context path: ${DOCKER_CONTEXT_PATH}"
-    shift 2
-fi
-
-if [[ "$1" == "-it" ]]; then
-    CI_DOCKER_EXTRA_PARAMS+=('-it')
-    shift 1
-fi
-
-if [[ ! -f "${DOCKERFILE_PATH}" ]]; then
-    echo "Invalid Dockerfile path: \"${DOCKERFILE_PATH}\""
-    exit 1
-fi
-
-COMMAND=("$@")
-
-# Validate command line arguments.
-if [ "$#" -lt 1 ] || [ ! -e "${SCRIPT_DIR}/Dockerfile.${CONTAINER_TYPE}" ]; then
-    supported_container_types=$( ls -1 ${SCRIPT_DIR}/Dockerfile.* | \
-        sed -n 's/.*Dockerfile\.\([^\/]*\)/\1/p' | tr '\n' ' ' )
-      echo "Usage: $(basename $0) CONTAINER_TYPE COMMAND"
-      echo "       CONTAINER_TYPE can be one of [${supported_container_types}]"
-      echo "       COMMAND is a command (with arguments) to run inside"
-      echo "               the container."
-      exit 1
-fi
-
-# Use nvidia-docker if the container is GPU.
-if [[ "${CONTAINER_TYPE}" == *"gpu"* ]]; then
-    DOCKER_BINARY="nvidia-docker"
-else
-    DOCKER_BINARY="docker"
-fi
-
-# Helper function to traverse directories up until given file is found.
-function upsearch () {
-    test / == "$PWD" && return || \
-        test -e "$1" && echo "$PWD" && return || \
-        cd .. && upsearch "$1"
-}
-
-# Set up WORKSPACE and BUILD_TAG. Jenkins will set them for you or we pick
-# reasonable defaults if you run it outside of Jenkins.
-WORKSPACE="${WORKSPACE:-${SCRIPT_DIR}/../../}"
-BUILD_TAG="${BUILD_TAG:-nnvm-ci}"
-
-# Determine the docker image name
-DOCKER_IMG_NAME="${BUILD_TAG}.${CONTAINER_TYPE}"
-
-# Under Jenkins matrix build, the build tag may contain characters such as
-# commas (,) and equal signs (=), which are not valid inside docker image names.
-DOCKER_IMG_NAME=$(echo "${DOCKER_IMG_NAME}" | sed -e 's/=/_/g' -e 's/,/-/g')
-
-# Convert to all lower-case, as per requirement of Docker image names
-DOCKER_IMG_NAME=$(echo "${DOCKER_IMG_NAME}" | tr '[:upper:]' '[:lower:]')
-
-# Print arguments.
-echo "WORKSPACE: ${WORKSPACE}"
-echo "CI_DOCKER_EXTRA_PARAMS: ${CI_DOCKER_EXTRA_PARAMS[@]}"
-echo "COMMAND: ${COMMAND[@]}"
-echo "CONTAINER_TYPE: ${CONTAINER_TYPE}"
-echo "BUILD_TAG: ${BUILD_TAG}"
-echo "DOCKER CONTAINER NAME: ${DOCKER_IMG_NAME}"
-echo ""
-
-
-# Build the docker container.
-echo "Building container (${DOCKER_IMG_NAME})..."
-docker build -t ${DOCKER_IMG_NAME} \
-    -f "${DOCKERFILE_PATH}" "${DOCKER_CONTEXT_PATH}"
-
-# Check docker build status
-if [[ $? != "0" ]]; then
-    echo "ERROR: docker build failed."
-    exit 1
-fi
-
-# Run the command inside the container.
-echo "Running '${COMMAND[@]}' inside ${DOCKER_IMG_NAME}..."
-
-# By default we cleanup - remove the container once it finish running (--rm)
-# and share the PID namespace (--pid=host) so the process inside does not have
-# pid 1 and SIGKILL is propagated to the process inside (jenkins can kill it).
-echo ${DOCKER_BINARY}
-${DOCKER_BINARY} run --rm --pid=host \
-    -v ${WORKSPACE}:/workspace \
-    -w /workspace \
-    -e "CI_BUILD_HOME=/workspace" \
-    -e "CI_BUILD_USER=$(id -u -n)" \
-    -e "CI_BUILD_UID=$(id -u)" \
-    -e "CI_BUILD_GROUP=$(id -g -n)" \
-    -e "CI_BUILD_GID=$(id -g)" \
-    ${CI_DOCKER_EXTRA_PARAMS[@]} \
-    ${DOCKER_IMG_NAME} \
-    bash tests/ci_build/with_the_same_user \
-    ${COMMAND[@]}
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_core.sh b/nnvm/tests/ci_build/install/ubuntu_install_core.sh
deleted file mode 100644
index 9823ae0788ac6c1fe18513e1a551fe7a4f722653..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_core.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-# install libraries for building c++ core on ubuntu
-apt-get install -y --no-install-recommends --force-yes \
-        git make libgtest-dev cmake wget unzip libtinfo-dev libz-dev\
-        libcurl4-openssl-dev libopenblas-dev g++ sudo
-
-cd /usr/src/gtest && cmake CMakeLists.txt && make && cp *.a /usr/lib
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_coreml.sh b/nnvm/tests/ci_build/install/ubuntu_install_coreml.sh
deleted file mode 100644
index 3cfd6c2935905153486658b6e3e273b3893c0f0b..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_coreml.sh
+++ /dev/null
@@ -1 +0,0 @@
-pip2 install coremltools
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_darknet.sh b/nnvm/tests/ci_build/install/ubuntu_install_darknet.sh
deleted file mode 100644
index f5e0c2791d80f0027159e826b23cc40a1d19c18d..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_darknet.sh
+++ /dev/null
@@ -1,4 +0,0 @@
-#install the necessary dependancies, cffi, opencv
-wget 'https://github.com/siju-samuel/darknet/blob/master/lib/libdarknet.so?raw=true' -O libdarknet.so
-pip2 install opencv-python cffi
-pip3 install opencv-python cffi
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_keras.sh b/nnvm/tests/ci_build/install/ubuntu_install_keras.sh
deleted file mode 100644
index 9730d83bf46941d368f857797601f806614f8a72..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_keras.sh
+++ /dev/null
@@ -1 +0,0 @@
-pip2 install keras tensorflow h5py
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_llvm.sh b/nnvm/tests/ci_build/install/ubuntu_install_llvm.sh
deleted file mode 100644
index e5b28b911f61eb691b6b039dca7996f726ee3964..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_llvm.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-4.0 main\
-     >> /etc/apt/sources.list.d/llvm.list
-echo deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-4.0 main\
-     >> /etc/apt/sources.list.d/llvm.list
-
-echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-5.0 main\
-     >> /etc/apt/sources.list.d/llvm.list
-echo deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-5.0 main\
-     >> /etc/apt/sources.list.d/llvm.list
-
-echo deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial main\
-     >> /etc/apt/sources.list.d/llvm.list
-echo deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial main\
-     >> /etc/apt/sources.list.d/llvm.list
-
-wget -O - http://apt.llvm.org/llvm-snapshot.gpg.key|sudo apt-key add -
-apt-get update && apt-get install -y --force-yes llvm-4.0 llvm-5.0 llvm-6.0
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_mxnet.sh b/nnvm/tests/ci_build/install/ubuntu_install_mxnet.sh
deleted file mode 100644
index ebf641e2a910881f5504c87d20b534739964424f..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_mxnet.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-pip2 install mxnet
-pip3 install mxnet
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_onnx.sh b/nnvm/tests/ci_build/install/ubuntu_install_onnx.sh
deleted file mode 100644
index 138e70d712002f424c70e201380f46ce8af96158..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_onnx.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-pip2 install onnx>=1.1.0
-pip3 install onnx>=1.1.0
-
-pip2 install http://download.pytorch.org/whl/cu75/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl
-pip2 install torchvision
-pip3 install http://download.pytorch.org/whl/cu75/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl
-pip3 install torchvision
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_opencl.sh b/nnvm/tests/ci_build/install/ubuntu_install_opencl.sh
deleted file mode 100644
index 636236539a984b0afa82f06441442de1e324faa9..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_opencl.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-# Install OpenCL runtime in nvidia docker.
-apt-get install -y --no-install-recommends --force-yes \
-        ocl-icd-libopencl1 \
-        clinfo && \
-        rm -rf /var/lib/apt/lists/*
-
-mkdir -p /etc/OpenCL/vendors && \
-    echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
-
-echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
-    echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_python.sh b/nnvm/tests/ci_build/install/ubuntu_install_python.sh
deleted file mode 100644
index c6f6f75c564e94cd6d10e382b9c9715e91d655df..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_python.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-# install python and pip, don't modify this, modify install_python_package.sh
-apt-get update && apt-get install -y python-pip python-dev python3-dev
-
-# the version of the pip shipped with ubuntu may be too lower, install a recent version here
-cd /tmp && wget https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && python2 get-pip.py
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_python_package.sh b/nnvm/tests/ci_build/install/ubuntu_install_python_package.sh
deleted file mode 100644
index fbed2e1904cd1c044120b49eb2ed237427f4a22c..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_python_package.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-# install libraries for python package on ubuntu
-pip2 install nose pylint numpy nose-timer cython decorator scipy tornado
-pip3 install nose pylint numpy nose-timer cython decorator scipy tornado
diff --git a/nnvm/tests/ci_build/install/ubuntu_install_sphinx.sh b/nnvm/tests/ci_build/install/ubuntu_install_sphinx.sh
deleted file mode 100644
index 767643f104886ba9a941efa054d7612c35332b12..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/install/ubuntu_install_sphinx.sh
+++ /dev/null
@@ -1 +0,0 @@
-pip install sphinx==1.6.2 sphinx-gallery sphinx_rtd_theme matplotlib Image commonmark>=0.7.3 docutils>=0.11
diff --git a/nnvm/tests/ci_build/with_the_same_user b/nnvm/tests/ci_build/with_the_same_user
deleted file mode 100644
index 1e6ab883694b8d4a0fca8cda078902a58034ffbc..0000000000000000000000000000000000000000
--- a/nnvm/tests/ci_build/with_the_same_user
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/usr/bin/env bash
-
-# This script is a wrapper creating the same user inside container as the one
-# running the ci_build.sh outside the container. It also set the home directory
-# for the user inside container to match the same absolute path as the workspace
-# outside of container.  Do not run this manually. It does not make sense. It is
-# intended to be called by ci_build.sh only.
-
-set -e
-
-COMMAND=("$@")
-
-if ! touch /this_is_writable_file_system; then
-  echo "You can't write to your filesystem!"
-  echo "If you are in Docker you should check you do not have too many images" \
-      "with too many files in them. Docker has some issue with it."
-  exit 1
-else
-  rm /this_is_writable_file_system
-fi
-
-getent group "${CI_BUILD_GID}" || addgroup --gid "${CI_BUILD_GID}" "${CI_BUILD_GROUP}"
-getent passwd "${CI_BUILD_UID}" || adduser --gid "${CI_BUILD_GID}" --uid "${CI_BUILD_UID}" \
-    --gecos "${CI_BUILD_USER} (generated by with_the_same_user script)" \
-    --disabled-password --home "${CI_BUILD_HOME}" --quiet "${CI_BUILD_USER}"
-usermod -a -G sudo "${CI_BUILD_USER}"
-echo "${CI_BUILD_USER} ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-nopasswd-sudo
-
-HOME=${CI_BUILD_HOME}\
-    sudo -u "#${CI_BUILD_UID}" --preserve-env\
-    PATH=${PATH}\
-    LD_LIBRARY_PATH=${LD_LIBRARY_PATH}\
-    HOME=${CI_BUILD_HOME}\
-    ${COMMAND[@]}
diff --git a/nnvm/tests/python/frontend/onnx/test_forward.py b/nnvm/tests/python/frontend/onnx/test_forward.py
index 3a7076d17b68868d6a2abeac0f5f6dc13f4a0036..9aef8b2cbe9e0d31168466322eaabfcad0695175 100644
--- a/nnvm/tests/python/frontend/onnx/test_forward.py
+++ b/nnvm/tests/python/frontend/onnx/test_forward.py
@@ -16,11 +16,12 @@ def verify_onnx_forward_impl(graph_file, data_shape, out_shape):
 
     def get_tvm_output(model, x, target, ctx, dtype='float32'):
         new_sym, params = nnvm.frontend.from_onnx(model)
-        shape_dict = {'input_0': x.shape}
+        input_name = model.graph.input[0].name
+        shape_dict = {input_name: x.shape}
         graph, lib, params = nnvm.compiler.build(new_sym, target, shape_dict, params=params)
         m = graph_runtime.create(graph, lib, ctx)
         # set inputs
-        m.set_input('input_0', tvm.nd.array(x.astype(dtype)))
+        m.set_input(input_name, tvm.nd.array(x.astype(dtype)))
         m.set_input(**params)
         m.run()
         # get outputs
diff --git a/nnvm/tests/python/frontend/onnx/test_graph.py b/nnvm/tests/python/frontend/onnx/test_graph.py
index 89f13b447991124af106481e6ab7496a645d57fd..7fa705ef4c655d68b564136b9684bd792962e911 100644
--- a/nnvm/tests/python/frontend/onnx/test_graph.py
+++ b/nnvm/tests/python/frontend/onnx/test_graph.py
@@ -9,7 +9,8 @@ def compare_graph(onnx_file, nnvm_sym, ishape):
     onnx_sym, params = nnvm.frontend.from_onnx(onnx_model)
     g1 = nnvm.graph.create(onnx_sym)
     g2 = nnvm.graph.create(nnvm_sym)
-    ishapes = {'input_0': ishape}
+    input_name = onnx_model.graph.input[0].name
+    ishapes = {input_name: ishape}
     graph_attr.set_shape_inputs(g1, ishapes)
     graph_attr.set_shape_inputs(g2, ishapes)
     g1 = g1.apply("InferShape").apply("SimplifyInference")
diff --git a/nnvm/tests/scripts/task_frontend_test.sh b/nnvm/tests/scripts/task_frontend_test.sh
deleted file mode 100755
index 5881d1846a0fa35d73a1927a2860c6d3dc2d7832..0000000000000000000000000000000000000000
--- a/nnvm/tests/scripts/task_frontend_test.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-
-export PYTHONPATH=python:tvm/python:tvm/topi/python
-
-echo "Running ONNX frontend test..."
-python -m nose -v tests/python/frontend/onnx || exit -1
-
-echo "Running MXNet frontend test..."
-python -m nose -v tests/python/frontend/mxnet || exit -1
-
-echo "Running Keras frontend test..."
-python -m nose -v tests/python/frontend/keras || exit -1
diff --git a/nnvm/tests/scripts/task_lint.sh b/nnvm/tests/scripts/task_lint.sh
deleted file mode 100755
index a6774a189f35a4915cb93ec774f11770804870cc..0000000000000000000000000000000000000000
--- a/nnvm/tests/scripts/task_lint.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-echo "Check codestyle of c++ code..."
-make cpplint || exit -1
-echo "Check codestyle of python code..."
-make pylint || exit -1
-echo "Check documentations of c++ code..."
-make doc 2>log.txt
-(cat log.txt| grep -v ENABLE_PREPROCESSING |grep -v "unsupported tag") > logclean.txt
-echo "---------Error Log----------"
-cat logclean.txt
-echo "----------------------------"
-(cat logclean.txt|grep warning) && exit -1
-(cat logclean.txt|grep error) && exit -1
-rm logclean.txt
-rm log.txt
diff --git a/nnvm/tests/scripts/task_python_docs.sh b/nnvm/tests/scripts/task_python_docs.sh
deleted file mode 100755
index 94fa73e5e2de5963482333e5b629d801576fb9ee..0000000000000000000000000000000000000000
--- a/nnvm/tests/scripts/task_python_docs.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-mkdir -p docs/_build/html
-# C++ doc
-make doc
-
-rm -rf python/nnvm/*.pyc python/nnvm/*/*.pyc
-
-cd docs
-PYTHONPATH=../python:../tvm/python:../tvm/topi/python make html || exit -1
-cd _build/html
-tar czf docs.tgz *
-mv docs.tgz ../../../
diff --git a/nnvm/tests/travis/run_test.sh b/nnvm/tests/travis/run_test.sh
deleted file mode 100755
index ecce1f4d5a27e3feb5c47f24ce5f52a4beceb8d7..0000000000000000000000000000000000000000
--- a/nnvm/tests/travis/run_test.sh
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/bin/bash
-
-
-if [ ${TASK} == "lint" ]; then
-    make lint || exit -1
-    echo "Check documentations of c++ code..."
-    make doc 2>log.txt
-    (cat log.txt| grep -v ENABLE_PREPROCESSING |grep -v "unsupported tag") > logclean.txt
-    echo "---------Error Log----------"
-    cat logclean.txt
-    echo "----------------------------"
-    (cat logclean.txt|grep warning) && exit -1
-    (cat logclean.txt|grep error) && exit -1
-    exit 0
-fi
-
-
-if [ ! ${TRAVIS_OS_NAME} == "osx" ]; then
-    # use g++-4.8 for linux
-    if [ ${CXX} == "g++" ]; then
-        export CXX=g++-4.8
-    fi
-fi
-
-if [ ${TASK} == "cpp_test" ]; then
-    make -f dmlc-core/scripts/packages.mk gtest
-    echo "GTEST_PATH="${CACHE_PREFIX} >> config.mk
-    make test || exit -1
-    for test in tests/cpp/*_test; do
-        ./$test || exit -1
-    done
-    exit 0
-fi
-
-# run two test one for cython, one for ctypes
-if [ ${TASK} == "python_test" ]; then
-    make clean
-    make -j all || exit -1
-    if [ ${TRAVIS_OS_NAME} == "osx" ]; then
-        python -m nose tests/python/unittest/ || exit -1
-        python3 -m nose tests/python/unittest/ || exit -1
-    else
-        nosetests tests/python/unittest/ || exit -1
-        nosetests3 tests/python/unittest/ || exit -1
-    fi
-
-    make cython || exit -1
-    make cython3 || exit -1
-
-    if [ ${TRAVIS_OS_NAME} == "osx" ]; then
-        python -m nose tests/python/unittest/ || exit -1
-        python3 -m nose tests/python/unittest/ || exit -1
-    else
-        nosetests tests/python/unittest/ || exit -1
-        nosetests3 tests/python/unittest/ || exit -1
-    fi
-    exit 0
-fi
diff --git a/nnvm/tests/travis/setup.sh b/nnvm/tests/travis/setup.sh
deleted file mode 100755
index 88010a0b3d7e3a027c63f05146cb0a8fb75caf62..0000000000000000000000000000000000000000
--- a/nnvm/tests/travis/setup.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-
-if [ ${TRAVIS_OS_NAME} == "osx" ]; then
-    brew update
-    brew install python3
-    if [ ${TASK} == "python_test" ]; then
-        python -m pip install --user nose numpy cython
-        python3 -m pip install --user nose numpy cython
-    fi
-fi
-
-if [ ${TASK} == "lint" ]; then
-    pip install --user cpplint 'pylint==1.4.4' 'astroid==1.3.6'
-fi
diff --git a/nnvm/tests/travis/travis_after_failure.sh b/nnvm/tests/travis/travis_after_failure.sh
deleted file mode 100755
index a9bf588e2f88457fdf73ac7361ef1d596fb81453..0000000000000000000000000000000000000000
--- a/nnvm/tests/travis/travis_after_failure.sh
+++ /dev/null
@@ -1 +0,0 @@
-#!/bin/bash
diff --git a/python/tvm/_ffi/libinfo.py b/python/tvm/_ffi/libinfo.py
index 25d111d1f74a8a77a1915a0f6abcce25bf0f7f91..b449e712c32c3fcf75177a19a84ee4b4bae2e366 100644
--- a/python/tvm/_ffi/libinfo.py
+++ b/python/tvm/_ffi/libinfo.py
@@ -100,4 +100,5 @@ def find_lib_path(name=None, search_path=None, optional=False):
 
 
 # current version
-__version__ = "0.3.0"
+# We use the version of the upcoming release for code that is under development
+__version__ = "0.4.0"
diff --git a/tests/ci_build/Dockerfile.gpu b/tests/ci_build/Dockerfile.gpu
index 2ee5ed04e91fb04e434a75f2a7563db24b3cb449..43b3ca7b394a4d69b6df4f19a39814d134c211db 100644
--- a/tests/ci_build/Dockerfile.gpu
+++ b/tests/ci_build/Dockerfile.gpu
@@ -28,13 +28,26 @@ RUN cd recommonmark; python setup.py install
 # Enable doxygen for c++ doc build
 RUN apt-get update && apt-get install -y doxygen graphviz libprotobuf-dev protobuf-compiler
 
+COPY install/ubuntu_install_java.sh /install/ubuntu_install_java.sh
+RUN bash /install/ubuntu_install_java.sh
+
+COPY install/ubuntu_install_nodejs.sh /install/ubuntu_install_nodejs.sh
+RUN bash /install/ubuntu_install_nodejs.sh
+
+COPY install/ubuntu_install_rocm.sh /install/ubuntu_install_rocm.sh
+RUN bash /install/ubuntu_install_rocm.sh
+
+COPY install/ubuntu_install_opengl.sh /install/ubuntu_install_opengl.sh
+RUN bash /install/ubuntu_install_opengl.sh
+
+COPY install/ubuntu_install_vulkan.sh /install/ubuntu_install_vulkan.sh
+RUN bash /install/ubuntu_install_vulkan.sh
+
+
 # DL Frameworks
 COPY install/ubuntu_install_mxnet.sh /install/ubuntu_install_mxnet.sh
 RUN bash /install/ubuntu_install_mxnet.sh
 
-COPY install/ubuntu_install_onnx.sh /install/ubuntu_install_onnx.sh
-RUN bash /install/ubuntu_install_onnx.sh
-
 COPY install/ubuntu_install_coreml.sh /install/ubuntu_install_coreml.sh
 RUN bash /install/ubuntu_install_coreml.sh
 
@@ -44,6 +57,9 @@ RUN bash /install/ubuntu_install_keras.sh
 COPY install/ubuntu_install_darknet.sh /install/ubuntu_install_darknet.sh
 RUN bash /install/ubuntu_install_darknet.sh
 
+COPY install/ubuntu_install_onnx.sh /install/ubuntu_install_onnx.sh
+RUN bash /install/ubuntu_install_onnx.sh
+
 RUN pip install Pillow
 
 # Environment variables
@@ -53,3 +69,10 @@ ENV CPLUS_INCLUDE_PATH=/usr/local/cuda/include:${CPLUS_INCLUDE_PATH}
 ENV C_INCLUDE_PATH=/usr/local/cuda/include:${C_INCLUDE_PATH}
 ENV LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LIBRARY_PATH}
 ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
+
+ENV LD_LIBRARY_PATH=/opt/rocm/lib:${LD_LIBRARY_PATH}
+ENV PATH=/node_modules/.bin:${PATH}
+ENV VULKAN_SDK=/usr/local/VulkanSDK/1.0.65.0/x86_64
+ENV PATH=${PATH}:${VULKAN_SDK}/bin
+ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${VULKAN_SDK}/lib
+ENV VK_LAYER_PATH=${VULKAN_SDK}/etc/explicit_layer.d
diff --git a/tests/ci_build/install/ubuntu_install_onnx.sh b/tests/ci_build/install/ubuntu_install_onnx.sh
index 138e70d712002f424c70e201380f46ce8af96158..517ea77ab81e145281c0e6de1717d170c8a29433 100644
--- a/tests/ci_build/install/ubuntu_install_onnx.sh
+++ b/tests/ci_build/install/ubuntu_install_onnx.sh
@@ -1,3 +1,4 @@
+# pin to a specific version for now
 pip2 install onnx>=1.1.0
 pip3 install onnx>=1.1.0
 
diff --git a/tests/scripts/task_python_nnvm.sh b/tests/scripts/task_python_nnvm.sh
index 5bccc27f59ba765f89684a697fa8a642884ffb65..4d44ac0e14b3153b08c2244ad6640a809a1ae59e 100755
--- a/tests/scripts/task_python_nnvm.sh
+++ b/tests/scripts/task_python_nnvm.sh
@@ -11,10 +11,10 @@ python -m nose -v nnvm/tests/python/compiler || exit -1
 python3 -m nose -v nnvm/tests/python/compiler || exit -1
 
 echo "Running ONNX frontend test..."
-python -m nose -v tests/python/frontend/onnx || exit -1
+python -m nose -v nnvm/tests/python/frontend/onnx || exit -1
 
 echo "Running MXNet frontend test..."
-python -m nose -v tests/python/frontend/mxnet || exit -1
+python -m nose -v nnvm/tests/python/frontend/mxnet || exit -1
 
 echo "Running Keras frontend test..."
-python -m nose -v tests/python/frontend/keras || exit -1
+python -m nose -v nnvm/tests/python/frontend/keras || exit -1
diff --git a/tutorials/deployment/cross_compilation_and_rpc.py b/tutorials/cross_compilation_and_rpc.py
similarity index 99%
rename from tutorials/deployment/cross_compilation_and_rpc.py
rename to tutorials/cross_compilation_and_rpc.py
index f06bbfca64077f5b3841dead5909a65c9a4cfa25..3658e68fae0a8153e3648cc148add4020ab24363 100644
--- a/tutorials/deployment/cross_compilation_and_rpc.py
+++ b/tutorials/cross_compilation_and_rpc.py
@@ -238,8 +238,8 @@ print('%g secs/op' % cost)
 #    The target_host should be 'llvm -target=aarch64-linux-gnu'.
 #    But here we set 'llvm' to enable this tutorial to run locally.
 #
-#    Also we need to build the runtime with the flag `USE_OPENCL=1`.
-# build kernel (different from cpu, we need bind axis for OpenCL)
+#    Also we need to build the runtime with the flag `USE_OPENCL=1` to
+#    build the kernel (unlike the CPU case, we need to bind axes for OpenCL)
 #
 # The following functions shows how we can deploy CL
 def deploy_cl():
diff --git a/tutorials/deployment/README.txt b/tutorials/deployment/README.txt
deleted file mode 100644
index 16db717c5eeff4aad824e2fdd3f3e6ae17a44525..0000000000000000000000000000000000000000
--- a/tutorials/deployment/README.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Run and Deploy
---------------
diff --git a/tutorials/get_started.py b/tutorials/get_started.py
index 2c6165940d057a688de5cbd1dc3ee57bfff43103..de94827ab1e97be649fc5c836a037adfe3ecee0b 100644
--- a/tutorials/get_started.py
+++ b/tutorials/get_started.py
@@ -110,7 +110,7 @@ if tgt == "cuda":
 # function(including the inputs and outputs) as well as target language
 # we want to compile to.
 #
-# The result of compilation fadd is a GPU device function(if GPU is involved) 
+# The result of compilation fadd is a GPU device function (if GPU is involved)
 # that can as well as a host wrapper that calls into the GPU function.
 # fadd is the generated host wrapper function, it contains reference
 # to the generated device function internally.
diff --git a/tutorials/language/README.txt b/tutorials/language/README.txt
index c15ab3f334ff07fb572ac657e9d6ec83d739f398..6da8e3c57c1f1e3ee3fed394d9b1615ecaf52c86 100644
--- a/tutorials/language/README.txt
+++ b/tutorials/language/README.txt
@@ -1,2 +1,2 @@
-Lanuage and Schedules
----------------------
+Tensor Expression and Schedules
+-------------------------------
diff --git a/tutorials/nnvm/deploy_model_on_mali_gpu.py b/tutorials/nnvm/deploy_model_on_mali_gpu.py
index 53016607112dd9775a05a896c6896d70c3fbc244..3c0152b8c19955e65f0bb8d53557f91d5359f43e 100644
--- a/tutorials/nnvm/deploy_model_on_mali_gpu.py
+++ b/tutorials/nnvm/deploy_model_on_mali_gpu.py
@@ -101,7 +101,7 @@ port = 9090
 if not use_mali:
     # run server locally
     host = 'localhost'
-    port = 9092
+    port = 9095
     server = rpc.Server(host=host, port=port, use_popen=True)
 
 ######################################################################
@@ -169,7 +169,7 @@ out_shape = (batch_size, num_classes)
 # Compile The Graph
 # -----------------
 # To compile the graph, we call the :any:`nnvm.compiler.build` function
-# with the graph configuration and parameters. As we use OpenCL for 
+# with the graph configuration and parameters. As we use OpenCL for
 # GPU computing, the tvm will generate both OpenCL kernel code and ARM
 # CPU host code. The CPU host code is used for calling OpenCL kernels.
 # In order to generate correct CPU code, we need to specify the target
@@ -182,12 +182,14 @@ out_shape = (batch_size, num_classes)
 
 if use_mali:
     target_host = "llvm -target=aarch64-linux-gnu -mattr=+neon"
+    target = tvm.target.mali()
 else:
     target_host = "llvm"
+    target = tvm.target.cuda()
 
 # set target as  `tvm.target.mali` instead of 'opencl' to enable
 # target-specified optimization
-graph, lib, params = nnvm.compiler.build(net, target=tvm.target.mali(),
+graph, lib, params = nnvm.compiler.build(net, target=target,
         shape={"data": data_shape}, params=params, target_host=target_host)
 
 # After `nnvm.compiler.build`, you will get three return values: graph,
@@ -212,7 +214,7 @@ remote = rpc.connect(host, port)
 remote.upload(lib_fname)
 rlib = remote.load_module('net.tar')
 
-ctx = remote.cl(0)
+ctx = remote.cl(0) if use_mali else remote.gpu(0)
 # upload the parameter
 rparams = {k: tvm.nd.array(v, ctx) for k, v in params.items()}
 
diff --git a/tutorials/nnvm/deploy_model_on_rasp.py b/tutorials/nnvm/deploy_model_on_rasp.py
index 885d928c93930a16965ad8ed425f1d303f9df558..b14c8b052e653a62c8947e815cdf51c47ee4e65c 100644
--- a/tutorials/nnvm/deploy_model_on_rasp.py
+++ b/tutorials/nnvm/deploy_model_on_rasp.py
@@ -80,7 +80,8 @@ from tvm.contrib import graph_runtime as runtime
 #
 #      Loading runtime library /home/YOURNAME/code/tvm/lib/libtvm_runtime.so... exec only
 #      INFO:root:RPCServer: bind to 0.0.0.0:9090
-#
+
+
 ######################################################################
 # For demonstration, we simply start an RPC server on the same machine,
 # if :code:`use_rasp` is False. If you have set up the remote
diff --git a/tutorials/nnvm/from_darknet.py b/tutorials/nnvm/from_darknet.py
index 7a233f56b718a9f6e476c4bf64984b443f3f6cee..9613f023c1e9b69da4ec31fe01af0a812b5703f4 100644
--- a/tutorials/nnvm/from_darknet.py
+++ b/tutorials/nnvm/from_darknet.py
@@ -1,23 +1,18 @@
 """
-Tutorial for running Yolo-V2 in Darknet Models
-=====================
+Compile YOLO-V2 in DarkNet Models
+=================================
 **Author**: `Siju Samuel <https://siju-samuel.github.io/>`_
 
 This article is an introductory tutorial to deploy darknet models with NNVM.
-
-All the required models and libraries will be downloaded from the internet
-
-by the script.
-
+All the required models and libraries will be downloaded from the internet by the script.
 This script runs the YOLO-V2 Model with the bounding boxes
-
 Darknet parsing have dependancy with CFFI and CV2 library
-
 Please install CFFI and CV2 before executing this script
 
-pip install cffi
+.. code-block:: bash
 
-pip install opencv-python
+  pip install cffi
+  pip install opencv-python
 """
 from ctypes import *
 import math
@@ -40,12 +35,11 @@ else:
 ######################################################################
 # Set the parameters here.
 # Supported models alexnet, resnet50, resnet152, extraction, yolo
-######################################################################
+#
 model_name = 'yolo'
 test_image = 'dog.jpg'
 target = 'llvm'
 ctx = tvm.cpu(0)
-######################################################################
 
 def dlProgress(count, block_size, total_size):
     """Show the download progress."""
@@ -105,8 +99,8 @@ def download(url, path, overwrite=False, sizecompare=False):
 
 ######################################################################
 # Prepare cfg and weights file
+# ----------------------------
 # Pretrained model available https://pjreddie.com/darknet/imagenet/
-# --------------------------------------------------------------------
 # Download cfg and weights file first time.
 
 cfg_name = model_name + '.cfg'
@@ -142,7 +136,7 @@ sym, params = nnvm.frontend.darknet.from_darknet(net, dtype)
 
 ######################################################################
 # Compile the model on NNVM
-# --------------------------------------------------------------------
+# -------------------------
 # compile the model
 data = np.empty([batch_size, net.c ,net.h, net.w], dtype);
 shape = {'data': data.shape}
@@ -151,8 +145,8 @@ with nnvm.compiler.build_config(opt_level=2):
     graph, lib, params = nnvm.compiler.build(sym, target, shape, dtype, params)
 
 #####################################################################
-# Save the json
-# --------------------------------------------------------------------
+# Save the JSON
+# -------------
 def save_lib():
     #Save the graph, params and .so to the current directory
     print("Saving the compiled output...")
@@ -178,8 +172,8 @@ download(img_url, test_image)
 data = nnvm.testing.darknet.load_image(test_image, net.w, net.h)
 
 ######################################################################
-# Execute on TVM
-# --------------------------------------------------------------------
+# Execute on TVM Runtime
+# ----------------------
 # The process is no different from other examples.
 from tvm.contrib import graph_runtime
 
diff --git a/tutorials/nnvm/from_mxnet_to_webgl.py b/tutorials/nnvm/from_mxnet_to_webgl.py
index 8190b6a7e5509f7b648fa44827d86e4657ba44a3..7ff553bcb368ba91a3fd586e2b939a0e1930a93c 100644
--- a/tutorials/nnvm/from_mxnet_to_webgl.py
+++ b/tutorials/nnvm/from_mxnet_to_webgl.py
@@ -1,14 +1,11 @@
 """
-Quick Start - End-to-End Tutorial for NNVM/TVM Pipeline for OpenGL and WebGL
-============================================================================
+Deploy Deep Learning Models to OpenGL and WebGL
+===============================================
 **Author**: `Zhixun Tan <https://github.com/phisiart>`_
 
 This example shows how to build a neural network with NNVM python frontend and
-generate runtime library for WebGL running in a browser with TVM. (Thanks to
-Tianqi's `tutorial for cuda <http://nnvm.tvmlang.org/tutorials/get_started.html>`_ and
-Ziheng's `tutorial for Raspberry Pi <http://nnvm.tvmlang.org/tutorials/deploy_model_on_rasp.html>`_)
-To run this notebook, you need to install tvm and nnvm following
-`these instructions <https://github.com/dmlc/nnvm/blob/master/docs/how_to/install.md>`_.
+generate a runtime library for WebGL running in a browser with TVM.
+To run this notebook, you need to install TVM and NNVM.
 Notice that you need to build tvm with OpenGL.
 """
 
@@ -50,13 +47,13 @@ run_deploy_local = False
 run_deploy_rpc = False
 
 # To run the WebGL deploy demo, set this flag to True.
-run_deploy_web = True
+run_deploy_web = False
 
 ######################################################################
 # Download a Pre-trained Resnet18 Model
 # -------------------------------------
 # Here we define 2 functions:
-# 
+#
 # - A function that downloads a pre-trained resnet18 model from Gluon Model Zoo.
 #   The model that we download is in MXNet format; we then transform it into an
 #   NNVM computation graph.
@@ -75,7 +72,7 @@ def load_mxnet_resnet():
 
     params : dict[str -> NDArray]
         The pretrained model parameters.
-    
+
     data_shape: tuple
         The shape of the input tensor (an image).
 
@@ -116,11 +113,11 @@ def download_synset():
           "596b27d23537e5a1b5751d2b0481ef172f58b539/" + \
           "imagenet1000_clsid_to_human.txt"
     file_name = "synset.txt"
-    
+
     gluon.utils.download(url, file_name)
     with open(file_name) as f:
         synset = eval(f.read())
-    
+
     print("- Synset downloaded!")
     return synset
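Since the synset file is a Python dict literal, `ast.literal_eval` is a safer alternative to `eval` for parsing downloaded text: it accepts literals only and rejects arbitrary code. A sketch (the `text` value is a tiny stand-in, not the real synset):

```python
import ast

# A tiny stand-in for the downloaded synset text (illustrative, not real data)
text = "{0: 'tench, Tinca tinca', 1: 'goldfish, Carassius auratus'}"

# literal_eval parses Python literals only; arbitrary expressions raise ValueError
synset = ast.literal_eval(text)
```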
 
@@ -432,7 +429,7 @@ def deploy_web():
 
     from tvm.contrib import emscripten
 
-    curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+    curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(os.getcwd())))
     working_dir = os.getcwd()
     output_dir = os.path.join(working_dir, "resnet")
     if not os.path.exists(output_dir):
@@ -471,7 +468,7 @@ def deploy_web():
                     os.path.join(output_dir, "tvm_runtime.js"))
     shutil.copyfile(os.path.join(curr_path, "web/resnet.html"),
                     os.path.join(output_dir, "resnet.html"))
-    
+
     # Now we want to save some extra files so that we can execute the model from
     # JavaScript.
     # - data shape
diff --git a/tutorials/nnvm/from_onnx.py b/tutorials/nnvm/from_onnx.py
index c5680f4ae816e04b7a571070f0a831e33478e98e..8fb5a1048569978526ee1710d300798cdc424cb7 100644
--- a/tutorials/nnvm/from_onnx.py
+++ b/tutorials/nnvm/from_onnx.py
@@ -66,7 +66,9 @@ x = np.array(img_y)[np.newaxis, np.newaxis, :, :]
 # We should be familiar with the process right now.
 import nnvm.compiler
 target = 'cuda'
-shape_dict = {'input_0': x.shape}
+# assume the first listed input is the data tensor
+input_name = sym.list_input_names()[0]
+shape_dict = {input_name: x.shape}
 graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
 
 ######################################################################
@@ -78,7 +80,7 @@ ctx = tvm.gpu(0)
 dtype = 'float32'
 m = graph_runtime.create(graph, lib, ctx)
 # set inputs
-m.set_input('input_0', tvm.nd.array(x.astype(dtype)))
+m.set_input(input_name, tvm.nd.array(x.astype(dtype)))
 m.set_input(**params)
 # execute
 m.run()
diff --git a/tutorials/nnvm/using_external_lib.py b/tutorials/nnvm/using_external_lib.py
index f599438f402b85ef5ac0d2356d44255d9ba0f042..fd00768b93be2a14ddc1fb5f2e2c0c4b39800a75 100644
--- a/tutorials/nnvm/using_external_lib.py
+++ b/tutorials/nnvm/using_external_lib.py
@@ -1,6 +1,6 @@
 """
-Using external libraries with NNVM
-=====================
+Using External Libraries in NNVM
+================================
 **Author**: `Masahiro Masuda <https://github.com/masahi>`_
 
 This is a short tutorial on how to use external libraries such as cuDNN, or cuBLAS with NNVM.
@@ -24,7 +24,7 @@ from nnvm.testing import utils
 
 ######################################################################
 # Create a simple network
-# ---------------------------------------------
+# -----------------------
 # Let's create a very simple network for demonstration.
 # It consists of convolution, batch normalization, and ReLU activation.
 
@@ -40,7 +40,7 @@ net, params = utils.create_workload(simple_net, batch_size, data_shape[1:])
 
 ######################################################################
 # Build and run with cuda backend
-# ---------------------------------------------
+# -------------------------------
 # We build and run this network with cuda backend, as usual.
 # By setting the logging level to DEBUG, the result of NNVM graph compilation will be dumped as pseudo code.
 import logging
@@ -151,7 +151,7 @@ out_cuda = out.asnumpy()
 
 ######################################################################
 # Use cuDNN for a convolutional layer
-# ---------------------------------------------
+# -----------------------------------
 # We can use cuDNN to replace convolution kernels with cuDNN ones.
 # To do that, all we need to do is append the option " -libs=cudnn" to the target string.
 net, params = utils.create_workload(simple_net, batch_size, data_shape[1:])
@@ -192,14 +192,14 @@ out_cudnn = out.asnumpy()
 
 ######################################################################
 # Verify the result
-# ---------------------------------------------
+# -----------------
 # We can check that the results of two runs match.
 
 np.testing.assert_allclose(out_cuda, out_cudnn, rtol=1e-5)
 
 #####################################################################
 # Conclusion
-# ---------------------------------------------
+# ----------
 # This tutorial covered the usage of cuDNN with NNVM.
 # We also have support for cuBLAS. If cuBLAS is enabled, it will be used inside a fully connected layer (nnvm.symbol.dense).
 # To use cuBLAS, set a target string as "cuda -libs=cublas".
diff --git a/tutorials/nnvm/define_and_compile_model.py b/tutorials/nnvm_quick_start.py
similarity index 92%
rename from tutorials/nnvm/define_and_compile_model.py
rename to tutorials/nnvm_quick_start.py
index 45cefcd1c81dca4097f0f09125d1acb4d2f65068..30650701516caba7587a1304b292eff9dda5b52c 100644
--- a/tutorials/nnvm/define_and_compile_model.py
+++ b/tutorials/nnvm_quick_start.py
@@ -1,20 +1,17 @@
 """
-Quick Start - End-to-End Tutorial for NNVM/TVM Pipeline
+Quick Start Tutorial for Compiling Deep Learning Models
 =======================================================
 **Author**: `Yao Wang <https://github.com/kevinthesun>`_
 
 This example shows how to build a neural network with NNVM python frontend and
-generate runtime library for Nvidia GPU and Raspberry Pi with TVM. (Thanks to
-Tianqi's `tutorial for cuda <http://nnvm.tvmlang.org/tutorials/get_started.html>`_ and
-Ziheng's `tutorial for Raspberry Pi <http://nnvm.tvmlang.org/tutorials/deploy_model_on_rasp.html>`_)
-To run this notebook, you need to install tvm and nnvm following
-`these instructions <https://github.com/dmlc/nnvm/blob/master/docs/how_to/install.md>`_.
+generate a runtime library for Nvidia GPU and Raspberry Pi with TVM.
+To run this notebook, you need to install TVM and NNVM.
 Notice that you need to build tvm with cuda and llvm.
 """
 
 ######################################################################
 # Overview for Supported Hardware Backend of TVM
-# -----------------------------
+# ----------------------------------------------
 # The image below shows the hardware backends currently supported by TVM:
 #
 # .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png
@@ -52,7 +49,7 @@ print(net.debug_str())
 
 ######################################################################
 # Compilation
-# ----------------------------
+# -----------
 # Next step is to compile the model using the NNVM/TVM pipeline.
 # Users can specify the optimization level of the compilation.
 # Currently this value can be 0 to 2, which corresponds to
@@ -69,7 +66,7 @@ print(net.debug_str())
 #
 # We'll first compile for Nvidia GPU. Behind the scenes, `nnvm.compiler.build`
 # first does a number of graph-level optimizations, e.g. pruning, fusing, etc.,
-# then registers the operators (i.e. the nodes of the optmized graphs) to 
+# then registers the operators (i.e. the nodes of the optimized graphs) to
 # TVM implementations to generate a `tvm.module`.
 # To generate the module library, TVM will first transform the HLO IR into the lower
 # intrinsic IR of the specified target backend, which is CUDA in this example.
@@ -120,7 +117,7 @@ print(out.asnumpy()[0][0:10])
 
 ######################################################################
 # Compile and Deploy the Model to Raspberry Pi Remotely with RPC
-# ------------------------------
+# --------------------------------------------------------------
 # Following the steps above, we can also compile the model for Raspberry Pi.
 # TVM provides an rpc module to help with remote deployment.
 #
@@ -130,9 +127,9 @@ print(out.asnumpy()[0][0:10])
 # :code:`use_rasp` to True, also change the host and port with your
 # device's host address and port number.
 
-# If we run the example locally for demonstration, we can simply set the 
-# compilation target as `llvm`. 
-# To run it on the Raspberry Pi, you need to specify its instruction set. 
+# If we run the example locally for demonstration, we can simply set the
+# compilation target as `llvm`.
+# To run it on the Raspberry Pi, you need to specify its instruction set.
 # `llvm -target=armv7l-none-linux-gnueabihf -mcpu=cortex-a53 -mattr=+neon`
 # is the recommended compilation configuration, thanks to Ziheng's work.
 
@@ -145,7 +142,7 @@ port = 9090
 if not use_rasp:
     # run server locally
     host = 'localhost'
-    port = 9090
+    port = 9099
     server = rpc.Server(host=host, port=port, use_popen=True)
 
 # compile and save model library
@@ -190,4 +187,3 @@ print(out.asnumpy()[0][0:10])
 if not use_rasp:
     # terminate the local server
     server.terminate()
-
diff --git a/tutorials/optimize/README.txt b/tutorials/optimize/README.txt
index 4118f94bd51fc95fdcb460352af0fd06a699049b..b051548c5351e57e918b775e68d4daf44911c84d 100644
--- a/tutorials/optimize/README.txt
+++ b/tutorials/optimize/README.txt
@@ -1,3 +1,2 @@
-Optimize Operators
-------------------
-
+Optimize Tensor Operators
+-------------------------
diff --git a/tutorials/optimize/opt_gemm.py b/tutorials/optimize/opt_gemm.py
index 44ee53a7339958dbd60d95ef3b959ac38081872e..803b81e7d2221e70d723c6adb96f30b9f6fdbdd3 100644
--- a/tutorials/optimize/opt_gemm.py
+++ b/tutorials/optimize/opt_gemm.py
@@ -174,7 +174,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True))
 
 ###################################################################################################
 # Loop Permutation
-# -------------
+# ----------------
 # If we look at the above IR, we can see the inner loop row data is vectorized and
 # B is transformed into PackedB. The traversal of PackedB is sequential now.
 # So we will look at the access pattern of A. In the current schedule, A is accessed column by column
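The payoff of loop permutation can be sketched in plain Python (a conceptual analogue, not the TVM schedule itself; the function names are illustrative). Both loop orders compute the same product, but with `j` innermost both B and C are walked row by row, i.e. sequentially in memory:

```python
import numpy as np

def matmul_ijk(A, B):
    """k innermost: B is walked column-wise, a strided access pattern."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_ikj(A, B):
    """j innermost: B and C are walked row by row, a sequential access pattern."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]  # hoisted: constant over the inner loop
            for j in range(n):
                C[i][j] += a * B[k][j]
    return C

A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
```

Permuting the loops changes only the traversal order, never the result, which is why TVM can apply it freely as a schedule transformation.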
@@ -262,7 +262,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True))
 
 ################################################################################################
 # Write cache for blocks
-# --------
+# ----------------------
 # After blocking, the program will write the result to C block by block; the access pattern
 # is not sequential. So we can use a sequential cache array to hold the block results and
 # write to C when all the block results are ready.
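A plain numpy analogue of the write-cache idea (illustrative, not the TVM schedule; `blocked_matmul` and `bn` are hypothetical names): each block is accumulated in a small local array first, then written to C in one sequential burst once all its results are ready:

```python
import numpy as np

def blocked_matmul(A, B, bn=4):
    """Matrix multiply with a per-block write cache (n must be divisible by bn)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for xo in range(0, n, bn):
        for yo in range(0, n, bn):
            # Accumulate into a small sequential cache block first
            CC = np.zeros((bn, bn))
            for k in range(n):
                CC += np.outer(A[xo:xo + bn, k], B[k, yo:yo + bn])
            # Write the whole block to C once all the block results are ready
            C[xo:xo + bn, yo:yo + bn] = CC
    return C

A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
```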
@@ -308,7 +308,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True))
 
 ###################################################################################################
 # Parallel
-# -------------
+# --------
 # Furthermore, we can also utilize multi-core processors for thread-level parallelization.
 
 s = tvm.create_schedule(C.op)
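As a plain-Python analogue of parallelizing the outer loop (not TVM's `parallel` schedule primitive; `parallel_matmul` is an illustrative name), independent output rows can be computed across worker threads:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def one_row(args):
    # Each task computes one independent output row
    A, B, i = args
    return A[i] @ B

def parallel_matmul(A, B, workers=4):
    """Compute A @ B with output rows distributed across a thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = list(pool.map(one_row, [(A, B, i) for i in range(A.shape[0])]))
    return np.vstack(rows)

A = np.random.rand(16, 16)
B = np.random.rand(16, 16)
```

The rows are independent, so no synchronization is needed beyond joining the pool, which is exactly the property that makes the outermost loop a safe axis to parallelize.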