Lighting the way to deep machine learning

Posted on 2016-06-26 14:57:10

June 23, 2016

Open source Torchnet helps researchers and developers build rapid and reusable prototypes of learning systems in Torch.
Building rapid and clean prototypes for deep machine-learning operations can now take a big step forward with Torchnet, a new software toolkit that fosters rapid and collaborative development of deep learning experiments by the Torch community.
Introduced and open-sourced this week at the International Conference on Machine Learning (ICML) in New York, Torchnet provides a collection of boilerplate code, key abstractions, and reference implementations that can be snapped together or taken apart and then later reused, substantially speeding development. It encourages a modular programming approach, reducing the chance of bugs while making it easy to use asynchronous, parallel data loading and efficient multi-GPU computations.
The new toolkit builds on the success of Torch, a framework for building deep learning models by providing fast implementations of common algebraic operations on both CPU (via OpenMP/SSE) and GPU (via CUDA).
A framework for experimentation
Although Torch has become one of the main frameworks for research in deep machine learning, it doesn't provide abstractions and boilerplate code for machine-learning experiments. So researchers repeatedly code their experiments from scratch and march over the same ground — making the same mistakes and possibly drawing incorrect conclusions — which slows development overall. We created Torchnet to give researchers clear guidelines on how to set up their code, and boilerplate code that helps them develop more quickly.
The modular Torchnet design makes it easy to test a series of coding variants focused on the data set, the data loading process, and the model, as well as optimization and performance measures. This makes rapid experimentation possible: running the same experiments on a different data set, for instance, is as simple as plugging in a different (bare-bones) data loader, and changing the evaluation criterion amounts to a one-line code change that plugs in a different performance meter, as the sketch below illustrates. (More detailed information can be found in the GitHub repository and in the Torchnet research paper.)
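As a hedged illustration (both meter classes also appear in the MNIST example later in this post), switching the performance measure really is a one-line swap:

  local tnt = require 'torchnet'

  -- Measure the average loss value over all examples...
  local meter = tnt.AverageValueMeter()
  -- ...or track top-1 classification error instead, a one-line change:
  -- local meter = tnt.ClassErrorMeter{topk = {1}}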
Torchnet's overarching design is akin to Legos, in that the building blocks follow a set of conventions that allow them to be snapped together easily. The blocks interlock to form a universal system in which pieces fit together firmly yet can easily be swapped out for others. We've also developed clear guidelines on how to build new pieces.
The open source Torch already has a very active developer community that has created packages for optimization, manifold learning, metric learning, and neural networks, among other things. Torchnet builds on this, and it is designed to serve as a platform to which the research community can contribute, primarily via plugins that implement machine-learning experiments or tools.
Powered for GPUs
Although machine learning and artificial intelligence have been around for many years, most of their recent advances have been powered by publicly available research data sets and the availability of more powerful computers — specifically ones powered by GPUs.
Torchnet is substantially different from deep learning frameworks such as Caffe, Chainer, TensorFlow, and Theano in that it does not focus on performing efficient inference and gradient computations in deep networks. Instead, Torchnet provides a framework on top of a deep learning framework (in this case, torch/nn) that makes rapid experimentation easier.
Torchnet provides a collection of subpackages and implements five main types of abstractions:

  • Datasets — provide a size() function that returns the number of samples in the data set, and a get(idx) function that returns the idx-th sample in the data set (see the sketch after this list).
  • Dataset Iterators — in the simplest case, a for loop that runs from one to the data set's size and calls the get() function with the loop value as input.
  • Engines — provide the boilerplate logic necessary for training and testing models.
  • Meters — used for performance measurements, such as the time needed to perform a training epoch or the value of the loss function averaged over all examples.
  • Logs — used for logging experiments.
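
To make the Datasets contract concrete, here is a minimal sketch; the ListDataset and torch.range construction mirror the MNIST example later in this post, while the random tensors are placeholder data:

  local tnt = require 'torchnet'

  -- A minimal data set of 100 placeholder samples.
  local dataset = tnt.ListDataset{
    list = torch.range(1, 100):long(),
    load = function(idx)
      return {
        input  = torch.randn(10),      -- placeholder input
        target = torch.LongTensor{1},  -- placeholder target
      }
    end,
  }
  print(dataset:size())          -- number of samples: 100
  local sample = dataset:get(1)  -- a table with input and target fields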

The most important subpackages provide implementations of boilerplate code relevant to particular machine-learning problem domains, including computer vision, natural language processing, and speech processing.
Other subpackages may be smaller and focus on more specific problems or even specific data sets. For instance, there are small subpackages that wrap vision data sets such as ImageNet and COCO, speech data sets such as TIMIT and LibriSpeech, and text data sets such as the One Billion Word Benchmark and WMT-14.
Example
This section presents a simple, working example of how to train a logistic regressor on the MNIST data set using Torchnet. The code first loads the necessary dependencies:

  require 'nn'
  local tnt   = require 'torchnet'
  local mnist = require 'mnist'

Subsequently, we define a function that constructs an asynchronous data set iterator over the MNIST training or test set. The data set iterator receives as input a closure that constructs the Torchnet data set object. Here, the data set is a ListDataset that simply returns the relevant row from tensors that contain the images and the targets; in practice, you would replace this ListDataset with your own data set definition. The core data set is wrapped in a BatchDataset to construct mini-batches of size 128:
  local function getIterator(mode)
    return tnt.ParallelDatasetIterator{
      nthread = 1,
      init    = function() require 'torchnet' end,
      closure = function()
        -- load MNIST inside the closure; it runs in a separate worker thread:
        local mnist = require 'mnist'
        local dataset = mnist[mode .. 'dataset']()
        -- flatten the 28x28 images into 784-dimensional double vectors:
        dataset.data = dataset.data:reshape(dataset.data:size(1),
          dataset.data:size(2) * dataset.data:size(3)):double()
        return tnt.BatchDataset{
          batchsize = 128,
          dataset = tnt.ListDataset{
            list = torch.range(1, dataset.data:size(1)):long(),
            load = function(idx)
              return {
                input  = dataset.data[idx],
                -- labels are 0-9; shift to 1-10 for Torch criteria:
                target = torch.LongTensor{dataset.label[idx] + 1},
              } -- sample contains input and target
            end,
          },
        }
      end,
    }
  end

Subsequently, we set up a simple linear model:
  local net = nn.Sequential():add(nn.Linear(784, 10))

Next, we initialize the Torchnet engine and implement hooks that reset, update, and print the average loss and the average classification error. The hook that updates the average loss and classification error is called after the forward() call on the training criterion:
  local engine = tnt.SGDEngine()
  local meter  = tnt.AverageValueMeter()
  local clerr  = tnt.ClassErrorMeter{topk = {1}}
  engine.hooks.onStartEpoch = function(state)
    meter:reset()
    clerr:reset()
  end
  engine.hooks.onForwardCriterion = function(state)
    meter:add(state.criterion.output)
    clerr:add(state.network.output, state.sample.target)
    print(string.format(
      'avg. loss: %2.4f; avg. error: %2.4f',
      meter:value(), clerr:value{k = 1}))
  end

Next, we minimize the logistic loss using SGD:
  local criterion = nn.CrossEntropyCriterion()
  engine:train{
    network   = net,
    iterator  = getIterator('train'),
    criterion = criterion,
    lr        = 0.1,
    maxepoch  = 10,
  }

After the model is trained, we measure the average loss and the classification error on the test set:
  engine:test{
    network   = net,
    iterator  = getIterator('test'),
    criterion = criterion,
  }

More advanced examples would likely implement additional hooks in the engine. For instance, if you want to measure the test error after each training epoch, this may be implemented in the engine.hooks.onEndEpoch hook; a sketch follows the GPU example below. Making the same example run on a GPU requires a few simple additions to the code, in particular copying both the model and the data to the GPU. Copying data samples to a buffer on the GPU can be performed by implementing a hook that is executed after the samples become available:
  require 'cunn'
  net       = net:cuda()
  criterion = criterion:cuda()
  local input  = torch.CudaTensor()
  local target = torch.CudaTensor()
  engine.hooks.onSample = function(state)
    input:resize(state.sample.input:size()):copy(state.sample.input)
    target:resize(state.sample.target:size()):copy(state.sample.target)
    state.sample.input  = input
    state.sample.target = target
  end
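As a hedged sketch of the per-epoch test measurement mentioned above (reusing net, criterion, the meters, and getIterator from this example, and assuming the engine state exposes an epoch counter), the onEndEpoch hook might look like:

  engine.hooks.onEndEpoch = function(state)
    -- reset the meters so they report test-set values only:
    meter:reset()
    clerr:reset()
    -- note: the onForwardCriterion hook above also fires during this pass
    engine:test{
      network   = net,
      iterator  = getIterator('test'),
      criterion = criterion,
    }
    print(string.format('epoch %d test error: %2.4f',
      state.epoch, clerr:value{k = 1}))
  end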

Collaborative intelligence
The goal of open-sourcing Torchnet is to empower the developer community, allowing it to rapidly build effective and reusable learning systems. Experimentation can flourish as prototypes are snapped together more quickly. Successful implementations can be easily reproduced, and bugs are diminished.
We hope that Torchnet channels the collaborative intelligence of the Torch community so we can all work together to create more effective deep learning experiments.

