


What Is A Tensor In Machine Learning

In recent years, tensor-based machine learning tools such as TensorFlow have become a very hot topic, and they seem even hotter among those who want to learn and apply machine learning. However, if you look back in history, you will find some basic but powerful, useful, and practical methods that also take advantage of tensor computation, well outside of deep learning scenarios. The following gives a concrete explanation.

In traditional computing, numerical linear algebra is one of the most important tools. Packages such as LINPACK and LAPACK are quite old, but they remain very powerful in any form today. At their core, linear algebra libraries are composed of very simple, regular operations: repeated multiplications and additions over one-dimensional or two-dimensional arrays (here called vectors and matrices). At the same time, linear algebra is applicable to an unusually broad range of problems: everything from rendering images in computer games to designing nuclear weapons can be solved, or approximated, with it.

The fundamental operations of linear algebra: the most basic linear algebra computation a computer performs is the dot product of two vectors. This dot product is simply the sum of the products of the corresponding elements of the two vectors. The product of a matrix and a vector can be regarded as the dot products of the matrix's rows with that vector, and the product of two matrices can be regarded as a collection of matrix-vector products, one for each column of the second matrix. Together with element-wise addition and multiplication, and multiplication of all elements by a scalar, these operations are all the linear algebra a machine needs.
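The reductions described above can be sketched directly. This is a minimal NumPy illustration (the article itself does not use NumPy; it is chosen here for brevity) showing how a matrix-vector product decomposes into row dot products, and a matrix-matrix product into column-wise matrix-vector products:

```python
import numpy as np

# Dot product of two vectors: sum of products of corresponding elements.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
dot = np.dot(u, v)  # 1*4 + 2*5 + 3*6 = 32

# Matrix-vector product: one dot product per row of the matrix.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
Av = A @ v
Av_by_rows = np.array([np.dot(row, v) for row in A])

# Matrix-matrix product: one matrix-vector product per column of B.
B = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])
AB = A @ B
AB_by_cols = np.stack([A @ col for col in B.T], axis=1)
```

Both decompositions agree exactly with the built-in products, which is the point: everything reduces to dot products, additions, and multiplications.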

The reason programs written in terms of linear algebra can run so fast is partly the regularity of linear algebra, and partly that its operations can be processed in parallel in large quantities. In terms of potential performance, from the early Cray-1 (translator's note: the Cray-1, built in 1975, was one of the world's first supercomputers, performing on the order of 100 million operations per second) to today's GPU-based computers, we have seen performance increase by a factor of more than 30,000. And when you consider clusters of GPU machines processing large amounts of data, their potential performance, at minimal cost, is roughly a million times higher than the fastest computers of that earlier era.

Nonetheless, history keeps repeating the same pattern: in order to take full advantage of each new processor, computations have to be expressed at ever higher levels of abstraction. The Cray-1 and its vector-processing successors required programs to use vector operations (such as the dot product) to get full performance from the hardware. Later machines required algorithms to be expressed in terms of matrix-vector or matrix-matrix operations before the hardware could be exploited anywhere near its full potential.

We are now standing at a similar juncture. The difference is that there is no obvious step beyond matrix-matrix operations: with them, our use of linear algebra has reached its limit.

However, we do not need to limit yourself on linear algebra. The fact that we can climb up some branches of mathematics along this tree. For a long time, people know that at that place are bigger fish than the matrix in mathematical abstraction of the bounding main, which is a candidate tensor (tensor). Tensor is of import mathematical foundation of full general relativity, in addition to its other branches of physics, it likewise plays a fundamental role. Then as matrix and vector mathematical concepts can be simplified into an assortment that we use in the computer, like, if we can simplify and tensor also characterized as multidimensional arrays and some related operations information technology? Unfortunately, things are not and so simple, which is mainly due to the absence of an obvious and simple (equally in matrix and vector like) a series of operations that tin be performed on tensor.

However, there is good news. Although we cannot get by with just a few fixed operations on tensors, we can write patterns of operations on tensors. That alone is not enough, because programs written directly from these patterns cannot be executed efficiently enough. But there is more good news: these inefficient but easy-to-write programs can be (essentially) automatically converted into very efficient implementations.

Better still, this conversion can be achieved without building a new programming language. A simple trick suffices: when we write the following TensorFlow code:

v1 = tf.constant(3.0)

v2 = tf.constant(4.0)

v3 = tf.add(v1, v2)

what actually happens is that the system creates a data structure like the one shown in Figure 1:

Figure 1: The code above is translated into a data structure that can be restructured and then transformed into machine-executable form. Translating code into a user-visible data structure means that the programs we write can be rewritten to execute more efficiently, and that derivatives can be computed from them, so that advanced optimizers can be used.

This data structure does not specify the program's actual execution. TensorFlow therefore has a chance, before actually running it, to rewrite the data structure into more efficient code. This rewriting may involve small or large pieces of the computation we want the machine to perform. It can also generate actual executable code for the CPU of the computer we are using, for a cluster, or for any GPU devices at hand. The remarkable thing is that we can write very simple programs that achieve surprisingly powerful results.
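The deferred-execution idea can be sketched in a few lines of plain Python. The toy `Node`, `constant`, `add`, and `run` names below are hypothetical, invented for illustration, not the real TensorFlow API; the point is only that building the expression records a data structure, and execution is a separate step that a real system could first rewrite and optimize:

```python
# Toy dataflow graph: building it computes nothing; running it does.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def constant(x):
    return Node("const", value=x)

def add(a, b):
    return Node("add", inputs=(a, b))

def run(node):
    # Walk the graph and evaluate it. A real system would rewrite and
    # optimize the graph before this step, and could target a GPU.
    if node.op == "const":
        return node.value
    if node.op == "add":
        return sum(run(n) for n in node.inputs)

v1 = constant(3.0)
v2 = constant(4.0)
v3 = add(v1, v2)   # v3 is just a data structure at this point
result = run(v3)   # evaluation happens here
```

Because `v3` is data rather than a finished computation, anything that can read the graph, including an optimizer or a differentiation pass, gets a chance to transform it before `run` is ever called.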

This, however, is only the beginning.

Doing useful things, but not the same old things

Systems like TensorFlow work by having a program completely describe a machine learning architecture (such as a deep neural network) and then adjusting the parameters of that architecture to minimize some error value. They achieve this by creating a data structure that represents our program, and another data structure that represents the gradient of our model's error value with respect to all of its parameters. Having this gradient function makes optimization much easier.
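To see what the gradient buys the optimizer, here is a minimal sketch in plain NumPy (not TensorFlow, and with hand-derived gradients, whereas a system like TensorFlow would produce them automatically from the program's data structure): fitting a one-parameter-pair linear model by repeatedly stepping each parameter against its gradient.

```python
import numpy as np

# Synthetic data generated by y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0   # model: pred = w*x + b
lr = 0.05         # learning rate

for _ in range(2000):
    err = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b,
    # derived by hand here; an autodiff system computes these for us.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b
```

After training, `w` and `b` recover the generating values 2 and 1. The mechanics are identical for a deep network with millions of parameters; what changes is only that deriving the gradients by hand stops being feasible, which is exactly why the automatic gradient matters.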

However, although you can use TensorFlow, or Caffe, or any other framework with essentially the same architecture to write such programs, the programs you write do not have to be machine learning programs at all. If you write a program using the tensor notation provided by the package of your choice, it can optimize programs of all kinds. Automatic differentiation and state-of-the-art compilers and optimizers producing efficient GPU code will still work in your favor.

As a simple example, Figure 2 shows a simple model of household energy consumption.

Figure 2: This figure shows a house's daily energy consumption (circles); the horizontal axis is temperature (degrees Fahrenheit). A piecewise linear model is superimposed on the consumption data. Logically, the model's parameters form a matrix, but when we have to deal with millions of such models, we can use a tensor.

The figure shows the energy usage of one house, together with a model of it. Obtaining one such model is not difficult, but to find models like this at scale we need to write code that performs this modeling task for the energy consumption of millions of houses. Using TensorFlow, we can build models for all of these houses at once, and we can apply optimizers that are far more efficient than what we had before. We can thus optimize millions of models simultaneously, much more efficiently than our original program ever could. In principle we could hand-optimize the code and derive the gradient functions by hand, but the time it would take to do that work, and more importantly the time spent debugging it, would keep me from ever getting the models built at all.
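The "millions of models at once" idea can be sketched as follows. This NumPy example (synthetic data and a plain linear model per house, invented here for illustration; the article's figure uses a piecewise linear model, and a real pipeline would use TensorFlow on GPUs) stacks every house's temperature and usage readings into 2-D arrays and fits all the per-house models in one batched computation, with no per-house loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n_houses, n_days = 1000, 50

# One row per house: daily temperatures and the (noiseless, synthetic)
# energy usage each house's true linear model would produce.
temps = rng.uniform(20.0, 90.0, size=(n_houses, n_days))
true_w = rng.uniform(-0.5, 0.5, size=(n_houses, 1))
true_b = rng.uniform(10.0, 40.0, size=(n_houses, 1))
usage = true_w * temps + true_b

# Closed-form least squares for every house simultaneously: all of the
# means, products, and sums below operate on whole arrays at once.
t_mean = temps.mean(axis=1, keepdims=True)
u_mean = usage.mean(axis=1, keepdims=True)
w = ((temps - t_mean) * (usage - u_mean)).sum(axis=1, keepdims=True) \
    / ((temps - t_mean) ** 2).sum(axis=1, keepdims=True)
b = u_mean - w * t_mean
```

Every operation is an array operation, so the same code fits one thousand or one million houses; only the leading dimension changes, and that dimension is exactly where the hardware's parallelism goes to work.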

This example shows how a tensor-based computing system such as TensorFlow (or Caffe, Theano, MXNet, and so on) can be used for optimization problems very different from deep learning.

So let this be an example for you: machine learning software, used well, can do many things beyond machine learning itself.

Original link: http://www.kdnuggets.com/2017/06/deep-learning-demystifying-tensors.html

Source: https://programmersought.com/article/70122471003/

Posted by: porternoust1988.blogspot.com
