TensorFlow Basic Syntax

A free video tutorial from Jose Portilla
Head of Data Science at Pierian Training
Rating: 4.6 out of 5 (instructor rating)
81 courses
3,750,930 students

Learn more from the full course

Complete Guide to TensorFlow for Deep Learning with Python

Learn how to use Google's Deep Learning Framework, TensorFlow, with Python! Solve problems with cutting-edge techniques!

14:07:23 of on-demand video • Updated April 2020

Understand how Neural Networks Work
Build your own Neural Network from Scratch with Python
Use TensorFlow for Classification and Regression Tasks
Use TensorFlow for Image Classification with Convolutional Neural Networks
Use TensorFlow for Time Series Analysis with Recurrent Neural Networks
Use TensorFlow for solving Unsupervised Learning Problems with AutoEncoders
Learn how to conduct Reinforcement Learning with OpenAI Gym
Create Generative Adversarial Networks with TensorFlow
Become a Deep Learning Guru!
English [CC]
Welcome, everyone, to this lecture on TensorFlow basic syntax. Here we're going to learn the very basics of TensorFlow. We'll start off by creating tensors, just constant tensors, then we'll move on to computations, and then to running a session in TensorFlow. Let's open up a Jupyter Notebook and get started. All right, the first thing we're going to do is import TensorFlow. We're already pretty far into the course, but this is the very first time we actually get to use TensorFlow. And just to make sure you're using the same version I am in the environment file, go ahead and run this line right here: print(tf.__version__). It should report some variation of 1.3. It doesn't matter if it says .0 at the end, but make sure you're using TensorFlow 1.3. Future versions like 1.4 and 1.5 may have slight syntax changes, and since we are just learning TensorFlow, I don't want you to get hung up on small syntax differences, so go ahead and make sure you're using 1.3. Once you fully understand TensorFlow, you can easily move on to a more updated version, in case you're watching this in the future. Let's start off by actually creating a tensor. The word tensor is basically just a fancy word for an N-dimensional array. We'll start off by creating the most basic tensor possible, and that is a constant. So I'm going to create a variable called "hello", and we'll say tf.constant, and I'm going to pass in a string here. We'll say "hello", and I'm going to leave a space at the end. Then I'm going to create another constant here; we'll call it "world". It's also going to be tf.constant, and this will be, as you may have guessed, the string "world". So if I take a look at what type of object this is, it is not a string object, it is tensorflow.python.framework.ops.Tensor. So this variable right here, by itself, is a Tensor object.
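The constants above can be sketched as follows. The lecture targets TensorFlow 1.3; as an assumption on my part, this sketch reaches the 1.x API through the `tf.compat.v1` shim that ships with modern TensorFlow 2.x installs, which keeps the lecture's syntax working.

```python
# Minimal sketch of the constants above, via the compat.v1 shim
# (an assumption -- the lecture itself runs plain TensorFlow 1.3).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # restore 1.x graph-mode behavior

hello = tf.constant('hello ')  # note the trailing space
world = tf.constant('world')

# These are Tensor objects, not Python strings
print(type(hello))
print(hello.dtype)  # the data type *inside* the tensor is string
```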
So as you may have guessed, if I try to print the variable "hello", I am not going to get a string. Instead it's going to say, "Hey, this is a tensor, it's a constant, and the data type inside of this tensor is a string." It's not actually going to print out the word "hello". In order to actually get "hello" to print, what we need to do is run this operation inside of a session, just like we did in our previous manual neural network. The way we actually create a TensorFlow session is with the following command: we say with tf.Session() as sess:, and then you can have a block of code indented here, and everything inside of this block is just going to be TensorFlow operations that you run. The reason we use the keyword "with" is that it makes sure we don't have to manually close the session. It automatically opens the session, runs the block of code, and then closes the session. Let's go ahead and do a simple run command. We'll say sess.run and then I can create an operation here. Let's do a concatenation operation, basically "hello" plus "world". We're going to run that, and since I didn't actually save it as a result, let's run this again, but assign it to result. Then, outside of the session, I can print the result, and it says "hello world". And if you're wondering what this b represents, right in front of the string, it just indicates in Python 3 that this is a bytes literal. For our purposes, we don't really need to concern ourselves too much with this b. Continuing on, let's explore more of the basics of TensorFlow. Let's perform another computation. Let's do something like addition, so I'm going to say A is equal to tf.constant, and I'm going to put a number here, like 10.
We're going to create another constant, B, equal to tf.constant 20, and then again, if I check the type of A, it, again, is just a tensor. And if we do something like A plus B, the result right now says, "Hey, this is a tf.Tensor, add, with data type int32." If I run A plus B again, notice that it now says add_3, which means TensorFlow is keeping track of this in the background: it's numbering the operations add_2, add_3, and so on. If I were to copy this and run it again, it keeps track of how many times you're asking for this. Now keep in mind, it hasn't actually executed these tasks, because we didn't run them inside of a session. So let's actually run them inside of a session. We'll say with tf.Session() as sess:, then result is equal to sess.run, and we can input the operation here, A plus B. Then if I check out my result, it's 30. 10 plus 20 is 30. Okay, so those are very basic computations. Let's go ahead and look at some more operations, and the operations I'm going to cover are really more in line with the TensorFlow version of NumPy operations. Remember, with NumPy we were creating matrices of zeros, ones, random normal distributions, random uniform distributions. So I'm going to create a bunch of operations here that we can check out. I'm going to create a constant again, so we have a constant operation that's just a constant number. Sometimes you need to have a matrix filled out, so we'll say fill_mat, and then tf.fill. If you do shift-enter here, it says, "Hey, this is going to create a tensor," and remember, that's just a fancy word for an N-dimensional array, filled with a scalar value. Then we're going to provide it with what it wants: the dimensions, and the value to fill with. So we'll say, "Hey, give me a four-by-four filled with the value 10." So that's our filled matrix.
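The session examples so far (concatenating the two string constants, and adding A and B) can be sketched like this, again reaching the 1.x API through the `compat.v1` shim as an assumption:

```python
# Sketch of running ops inside a session (compat.v1 shim assumed).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

hello = tf.constant('hello ')
world = tf.constant('world')
a = tf.constant(10)
b = tf.constant(20)

# The with-block opens the session, runs the ops, then closes it
with tf.Session() as sess:
    greeting = sess.run(hello + world)
    total = sess.run(a + b)

print(greeting)  # b'hello world' -- the b marks a Python 3 bytes literal
print(total)     # 30
```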
Then we can say something like my_zeros, and then we have tf.zeros. That's another quick operation TensorFlow gives you, and it just creates a tensor with all elements set to zero. So let's give it the shape; we'll again ask for a four-by-four. We're going to do the same thing for ones, as you may have guessed, with tf.ones, and let's keep it four-by-four. Now let me show you a few random distributions you can use. There's a random normal distribution; we'll call it my_randn. Keep in mind, everything on the left-hand side of that equals sign is just the variable name. Then we're going to type tf.random, and as we begin to type "random", you can see there are a ton of options here. We'll explore these options as we need them throughout the course, but random_normal is one of the more common ones. It outputs random values from a normal distribution, and you can provide the mean and standard deviation as well as the shape. So let's go ahead and do that. We've been doing four-by-four for everything, so let's continue with that trend. We'll just keep the defaults, but in case you wanted to specify, you could say mean is equal to 0.0 and standard deviation is equal to 1.0, which is the default. You can obviously change that as you see fit. A uniform distribution is also a very common distribution to use. So we'll say random_uniform, and let's do the same thing here, four-by-four. For a random uniform, instead of a mean and standard deviation, it wants a minimum value and a maximum value, and it draws values between the min value and the max value in a uniform manner. If you want a negative minimum value, that's okay too. So we'll say min value is zero, and max value is one. Okay, so we have a bunch of operations here. None of these have actually been executed yet.
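A sketch of the operations above, via the `compat.v1` shim as before. The value of the plain constant isn't given in the lecture, so the 15 below is an arbitrary placeholder:

```python
# Sketch of the NumPy-style matrix operations (compat.v1 shim assumed).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

my_const = tf.constant(15)              # placeholder value, not from the lecture
fill_mat = tf.fill((4, 4), 10)          # 4x4 filled with the scalar 10
my_zeros = tf.zeros((4, 4))
my_ones = tf.ones((4, 4))
my_randn = tf.random_normal((4, 4), mean=0.0, stddev=1.0)   # defaults spelled out
my_randu = tf.random_uniform((4, 4), minval=0, maxval=1)    # draws from [0, 1)

# Nothing has executed yet -- these are just graph nodes until we run them
with tf.Session() as sess:
    fill_val, zeros_val, ones_val, randu_val = sess.run(
        [fill_mat, my_zeros, my_ones, my_randu])
```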
So if you just call for one of them, like my_zeros, you don't see any values. It just says, "Hey, this is a TensorFlow tensor." It's waiting for you to execute it, or run it in a session. So I'm going to create a list here called my_ops, which is going to be full of these operations: my_const, fill_mat (just using tab to auto-complete these quickly), my_zeros, my_ones, my_randn, and then my_randu. Okay, so now I have a list of all of these. Let's go ahead and run them inside of a session. Usually, we're always going to be using the with tf.Session() pattern; that's pretty much how you always see it in the documentation. But I do want to introduce you to something called an interactive session. It's pretty useful for notebook settings like this Jupyter Notebook. It doesn't really have much use outside of a notebook setting, depending on how you're actually coding TensorFlow and whatever IDE you're using. But basically, an interactive session allows you to keep calling the session throughout multiple cells. Let me show you how to do that. We really won't be using it throughout the course, but in case you're interested, now's a good time to introduce it. You just say sess is equal to tf.InteractiveSession(). From then on, the rest of these cells essentially behave as if they were already inside a with tf.Session() block. Again, this interactive session is really only useful in a notebook setting. So I'm going to say, for op in my_ops, sess.run(op), and let's print this out so we can see the results. Run that, and here we can see all the results. Let's add a new line in between each result, and here we have it. I can see the constant; I can see the filled matrix, which, remember, was a four-by-four of tens; my zeros matrix; my ones matrix; and then my two random matrices.
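The interactive-session loop above might look like this (again via the `compat.v1` shim, with the placeholder constant value carried over as an assumption):

```python
# Sketch of looping over ops with an InteractiveSession (compat.v1 shim assumed).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

my_ops = [tf.constant(15),                               # placeholder value
          tf.fill((4, 4), 10),
          tf.zeros((4, 4)),
          tf.ones((4, 4)),
          tf.random_normal((4, 4)),
          tf.random_uniform((4, 4), minval=0, maxval=1)]

# InteractiveSession installs itself as the default session, so later
# cells can call sess.run (or an op's .eval()) without a with-block
sess = tf.InteractiveSession()

results = []
for op in my_ops:
    res = sess.run(op)
    results.append(res)
    print(res)
    print('\n')   # a blank line between each result
sess.close()
```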
So again, the reason I was able to do this outside of an actual session block was because I had this interactive session. It's really useful for a Jupyter Notebook environment, but to stick with the documentation and all the other examples you see online, we'll pretty much always be using with tf.Session(), unless it's a really quick job that I want to run between multiple cells. Okay, so we just have sess.run(op). Something to note is that a lot of these operations have an eval method on them. You may see that in the future: instead of saying sess.run and passing in the operation, you can take the operation and call its eval method, which is essentially telling it, "Hey, evaluate this operation." You get the exact same results when you run that. Again, typically we'll be saying sess.run instead of calling eval, but for something quick and dirty with an interactive session, we may just use .eval(). All right, continuing on, the last thing I want to talk about is matrix multiplication. We use matrix multiplication a lot with neural networks, especially our basic neural networks. So let's create a matrix real quick. We'll have it be a constant, and we're going to feed this in as a nested list. We'll say one, two here, comma, and then three, four. So this is a two-by-two matrix: it has one, two on the top row and three, four on the bottom row, just as a nested list. Then if I say A, I can call get_shape off of this, and it says that the shape of this tensor is two-by-two, which makes sense; that's what we provided. Let's go ahead and create one more constant, and we'll have it be a two-by-one.
We'll have the first number be 10 and the second number be 100, and this is where you may have to refresh on linear algebra after we do this multiplication. We get the shape, and this one's a two-by-one. So I'm going to say my result is equal to tf.matmul. Hopefully that looks a little familiar to you, based off our basic neural network when we implemented it. So I have my result here, and then I can say sess.run on the result, and it gives me back the actual array. It multiplied this two-by-two array by this two-by-one, and as a result, you get back a two-by-one. Now keep in mind, usually you'd have to run this within a session; it's only because I called this interactive session that I'm able to run it between multiple cells. Pretty useful for a Jupyter Notebook, not super useful anywhere else. Okay, and one last reminder: I could have just called eval to see the results as well. That's the very basics of TensorFlow syntax. I really hope that felt pretty familiar, especially after our manual neural network. You can see the TensorFlow framework doing a lot of the heavy lifting behind the scenes for you. The main things you should have gotten out of this lecture are that you can create basic constants and operations, and then run them within a session. Thanks everyone, and I'll see you at the next lecture.
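The matrix multiplication above, including the .eval() shortcut, can be sketched like this (compat.v1 shim assumed, as before). A two-by-two times a two-by-one gives a two-by-one: the first row is 1*10 + 2*100 = 210, the second is 3*10 + 4*100 = 430.

```python
# Sketch of tf.matmul and .eval() with an InteractiveSession
# (compat.v1 shim assumed).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant([[1, 2],
                 [3, 4]])        # shape (2, 2)
b = tf.constant([[10],
                 [100]])         # shape (2, 1)
print(a.get_shape())             # reports a two-by-two shape

my_result = tf.matmul(a, b)      # (2, 2) x (2, 1) -> (2, 1)

sess = tf.InteractiveSession()
result = sess.run(my_result)     # [[210], [430]]
same = my_result.eval()          # .eval() runs in the default session
sess.close()
```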