Open & Closed Captioning Effectively on a Budget

Learn Open and Closed Captioning, and the tools and workflow you need to implement captioning in your video projects.
4.7 (30 ratings)
2,635 students enrolled
Instructed by Sami Rahman Business / Media
Free
  • Lectures 83
  • Length 2 hours
  • Skill Level All Levels
  • Languages English
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion


About This Course

Published 8/2013 English

Course Description

This course will help you understand both open and closed captioning and how to create a workflow for you to caption your videos. The course will cover:

1. Introduction and Objectives

2. Steps in the Captioning Process

3. Planning

4. Workflow

5. Transcripts and Syncing Software

6. Caption Software

7. Viewing Caption Files

8. Extras

What are the requirements?

  • Basic Working Knowledge of Video, Audio, and Text Processing
  • Be able to Install Software
  • Mac or PC

What am I going to get from this course?

  • Define Captioning
  • Outline the Benefits
  • Define a Process for Captioning
  • Software to Use

What is the target audience?

  • Udemy Instructors
  • YouTube Content Creators
  • Anyone who wants to Expand their Audience

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Introduction and Objectives
01:16

Welcome to the course "Open and Closed Captioning Effectively on a Budget". My name is Sami Rahman, and I'm the founder of bridgingapps.org. We're a community of parents, teachers, doctors, therapists, and people with disabilities who share information on how to use mobile device technology, like iPads and Android devices, with those with disabilities. In this process we use a lot of video to both educate and communicate, and we needed to find a way to both efficiently and cost-effectively caption video for mass distribution on the internet. What you're going to see in this course is our process for doing it. So who is this course for? This course is for anyone who wants to learn how to effectively and efficiently caption video for distribution on the web, via desktop, or mobile devices.

00:14
The objectives of this course are threefold. First, we want to define captioning. Second, we want to outline the benefits of captioning. Third, we want to define a process.
01:58

What is captioning? Captioning is the process of taking the spoken word out of media and converting it from an audio signal into text. Further, it includes audio descriptors like "music", "shouting", and other descriptors that enhance the text and provide the same level of meaning that the audio would for someone without an audio impairment.

There are two outcomes or products from the captioning process. The first one is Open Captioning, and that is when you take a piece of media, you transcribe the audio and you add your audio description. You sync the audio, you sync the text with the image, and you actually take the text and burn the text onto the video image. So, the final product is a video stream, video image, that has the text burnt right on it. So you can play that in any player that plays video.

The second outcome of captioning is Closed Captioning, and this is the one that we're most familiar with. Closed Captioning takes the text and audio descriptors and adds it, encodes it into a stream in the video file like an audio track. Then, depending on the player, and I think we're most familiar with captioning in television, you can turn on or off the caption track.

As we'll learn later, this has a lot of advantages and disadvantages. For the sake of this conversation, Open Captioning is the text burnt on the image; Closed Captioning is the text as a separate track within the video file.

02:23

Who benefits from captioning? Generally speaking, we think that captioning is just for people with audio impairments, and that's true. The World Health Organization estimates that there are 360 million people worldwide with a disabling hearing loss. So, as a content creator, that's a pretty big audience in and of itself.

The next group of people that captioning helps is those with learning impairments. Now, I personally have dyslexia and dysgraphia, and so I will often use captioning if it's available. I will turn it on to help me compensate for my own disabilities. Now, I don't have total statistics on those with learning impairments; however, the National Dissemination Center for Children with Disabilities estimates that one in five people, at least in the United States, has some sort of learning disability, for example dyslexia or dysgraphia. Again, as content creators, that expands our audience in ways that are huge.

The next reason you would want to caption, and also provide transcripts in your notes, is for non-native speakers: they can take the transcript and, in the case of Google as an example, actually translate the caption on the fly into any of Google Translate's languages. As a content creator, you are expanding your audience in ways that you didn't even know were possible.

Last but not least, think of the way we learn, and as content creators think about what's so attractive about video. Video gives you auditory reinforcement, it gives you visual reinforcement, it gives you text reinforcement. So the caption further reinforces the concepts, the learning concepts, within your content. Even if you don't suffer from some sort of learning impairment or audio impairment, it can be good just for you as a learner. Not only does captioning expand your audience, but it ultimately makes your content better. So captioning is just good content dissemination practice.

01:46

What is the captioning process? The captioning process has three main steps. The first step is, once your video is complete, there is a transcription process. Here, the concept is that you want to take the spoken words in your video or presentation or whatever you're captioning, and convert them into text along with the audio descriptors. Text alone doesn't allow you to automatically play that with the video in sync.

The next step of the process is to sync the transcript to the video itself. You can think about this as timing the transcript. In order for the words to come out at the right time on screen, you need to create a timing file, a sync file, that has each line of text along with when it's supposed to go on screen and when it's supposed to go off screen. This is where, hopefully, we'll simplify this process dramatically for you. There are a lot of different sync file formats out there, so this took us a while to figure out, and we think we've got a really flexible process.

The last step in the process, once you've transcribed the audio and synced the transcript, is to merge those files together into either your Closed Caption file or your Open Caption file. That's the captioning process itself. Now, oftentimes in the captioning process you're going to hear that you're burning in the caption. You're also going to hear it called subtitling. Those are all other ways of describing the captioning process.

Terminology
Article
Section 2: Steps in the Process
00:31
Transcripts are really two things. We take video, as you can see on the left, and we describe actions within the video that are not necessarily apparent visually. So, for example, screaming or music. Those are two pretty good examples. Then, we take the spoken word out of the video and also convert that into text. So we've got action and the spoken word converted to text.
01:03

There are two ways of creating transcripts. The first one is the Human Conversion Process. Generally, this is considered to be more accurate, particularly in the case of proper nouns or names. The second benefit is that the process allows for correction. So, as I'll contrast later with the machine process, if the human has a question, they can go back, re-review the audio, and they might change it.

The third attribute is that the human process is able to handle variances, both in user speech patterns and in accents. You can also think about it in terms of enunciation. The human process is also more accurate when it comes to audio descriptors. So, let's contrast that.

01:27

The second way of doing transcripts is a machine conversion process. So, I'll give you an example. If you upload a video to YouTube and don't upload a transcript, YouTube will actually attempt to transcribe your video using speech-to-text conversion. The advantage is that it's much faster than the human process of transcription, but it tends to be less accurate. Now, this process is getting better and better as we understand more of the variances within the human voice and computer technology just gets better. But I, as an example, apparently have a horrible voice when it comes to machine transcription, because no matter what I say, it never comes out right. So I don't know if I have a speech impediment or what's going on, but YouTube does not like me.

Next, there's also a limit in terms of variance handling: its ability to handle speech impediments, like I said, strong accents, pauses. It just doesn't interpret the way a human being interprets; it's verbatim. And it's very, very limited in terms of audio descriptors: being able to detect music, although that one's fairly easy, but shouting versus not shouting, and other descriptors that might be in there. It just isn't able to detect those very well, and so you just don't get a very accurately described transcription.

01:48

Syncing. 

Syncing is the act, or the process, of taking the transcript and adding the timing elements to it. So you can see on screen what we have here: this actress is performing an action and she's speaking. What we want to do is start with a text file, add the timing, and end with a sync file. This has implications later on, but let's start with timing for a second.

Timing is generally measured from 00:00:00:00: the hour, the minute, the second, the frame. It starts at the beginning of the video, and when the first bit of spoken word or action starts, you begin the timing process. You can see here in our timing file, we start at zero and we end at 10 seconds, and that is the entire length of time that the action is supposed to appear.

Now, because the sync file is relative, this becomes incredibly important later on. A sync file is relative to that specific video, so if you edit the video later, you need to also edit the sync file. If your video goes from 10 minutes to 20 minutes, but your sync file is only 10 minutes long, then you're likely missing some spoken text. If your video goes from 10 minutes to 5 minutes, you're going to have to go into your sync file and take out the text that's been edited out. So, in effect, by incorporating a caption process, we're creating two files that you need to manage: your video file and your sync file.
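To make that re-editing work concrete, here is a minimal Python sketch (not part of the original course materials) that shifts every timestamp in an SRT-style sync file by a fixed offset, the kind of adjustment you'd need after trimming the front of a video. The sample cue text is hypothetical.

```python
import re

def shift_srt(srt_text, offset_ms):
    """Shift every HH:MM:SS,mmm timestamp in an SRT file by offset_ms."""
    def to_ms(h, m, s, ms):
        return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

    def fmt(total):
        h, rem = divmod(total, 3600000)
        m, rem = divmod(rem, 60000)
        s, ms = divmod(rem, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    def repl(match):
        # Clamp at zero so cues never start before the video does.
        return fmt(max(0, to_ms(*match.groups()) + offset_ms))

    return re.sub(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})", repl, srt_text)
```

For example, `shift_srt(entry, 2500)` pushes every cue 2.5 seconds later; a negative offset pulls them earlier.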

00:45

So, let's talk about syncing. There's two times in which you can do syncing. First of all, you can do it at the time of transcription. I'm going to show you a process in this course that will allow you to transcribe your video file and create the sync automatically as you transcribe, which is incredibly efficient.

Or, if for example, you outsource your transcription process, but want to in-source your syncing process, you are going to have transcripts that are text files that need to have sync files created in which case you have a separate process for syncing those files to the transcript. I will show you in this course how to do that as well.

00:46

Types of syncing. Again, we have human syncing. So in this course we're going to go through a process for human syncing; and then we also have machine based syncing.

So if your final output, for example, is YouTube, all you have to have is a transcript. You don't even have to have a sync file for the transcript. You just upload your video and your transcript, and YouTube will actually do the syncing to the audio itself using its machine-based technology. What's really nice about this is that it cuts a lot of steps out of the process.

But if you're not using YouTube and you want to deploy your files in, let's say, a mobile device like an iPad or on the desktop, then you're going to have to do some sort of human syncing.

00:32
Captioning, this is the third step in the process. We have our sync file; it has our timing, our actions, and our spoken words. Now we need to merge that into the video file. So we're going to start with two files, a sync file and a video file, and we're going to end with a single video file. That video file will either be Open or Closed Captioned. Creating that video file is called encoding the video.
00:47

Let's talk about Closed Captions first. Because Closed Captioning is a separate track inside the video file, not burnt onto the video image, one of the advantages is that you can have one or more caption tracks within your video.

You can distribute video, for example, that's captioned in multiple languages: French, English, Spanish. Think about this in terms of a DVD. A DVD will often have subtitles in multiple languages and it will also have audio tracks in multiple languages, as well.

You can see, one of the major advantages of Closed Captioning is you can choose to turn it on and off. You can also choose to incorporate multiple files in there. So, let's talk about those advantages.

01:50

So we talked about multiple content sources. This is the idea that within a single video distribution you can have multiple audio tracks and multiple caption tracks, often called subtitles, right? If you think about it in terms of translation, it's called a subtitle track. The user can then, which is our second point here, control which of those tracks they want to view at any given time.

One of the nice things about Closed Captioning is that, because it is a text file embedded inside a video file, certain search engines will actually pick up and index the entire text file. So imagine your video being indexed for all of the words in the video. In terms of SEO, which stands for Search Engine Optimization, a caption file, depending on the search engine, can be very, very effective. Think about an uncaptioned video file in terms of a search index: the only thing the search engine can index is the title and maybe the description. If the search engine also indexes the caption file, it has the title, the description, and the entire body of the video to search on. So this can help your videos rank much, much higher in search engine results.

Lastly, in the case of YouTube, because you have a text track inside of the video, YouTube will take that transcript and actually translate it on the fly into any language, and I'll show you within this course how to do that. So this takes your content and disseminates it across multiple languages as well. That's a huge advantage.

02:35

Let's talk about the disadvantages of Closed Captioning. The primary disadvantage that you, as the content creator, have to manage when you Closed Caption is making sure that your output will display properly in whatever viewer your users have, whatever player your user has. When you look at the history of Closed Captioning, what you'll find is that because the caption track is inside the video, whoever created the player typically created a separate kind of sync file that they felt was the best sort of standard, although there are no standards.

What you have out there in the marketplace is tons and tons of standards without any clear global standard. I say that, but the reality is that there is a standard within the broadcast world. In the United States, at least, there is a broadcast standard for captioning, because all stations that broadcast on the open airwaves have to caption their videos.

Recently, if the video's ever been broadcast, it also has to be captioned if it's being distributed electronically on the internet. There are very elaborate standards when it comes to broadcast. Because we're not broadcasters, and we're just talking about electronic media, there aren't standards for just passing a caption file to all the viewers. We'll go through a process in the course that gives you the most flexibility. The key concept here is that the player has to play it, and you, as the creator, have to manage that process.

The next disadvantage is that, unlike when you burn text onto the video itself, where you get total control over that text, in Closed Captioning the player controls how the text looks on screen.

Generally speaking, this is not an issue, but there are times when a player might put text a little high on the screen, or too small for your user. It's just one of the things that you need to test, to make sure you understand what product you're giving your end users, because it may not be exactly what you want or intend it to be.

00:44

Let's talk about the encoding process for Open Captions. The way that this works is, you have a piece of software for Open Captioning, and what that software does is it renders the text as a video image that is transparent. So you have text on top, and then the entire rest of the background is transparent.

Then, it takes the video source and merges frame by frame the text onto the video so that when you are finished you only have one video track, and that video track has both the text on it and the video image.

00:51

So let's talk about the advantages. One of the primary advantages of Open Captioning is that it's universal. As long as a player can play the video, you don't have to worry about whether that player supports a caption track. You can create your video once, and it can go out to any system. For those systems whose players don't support captioning, it also allows you to caption anyway.

The other primary advantage of Open Captioning is that you, as the creator, get to control the way it looks on screen. You can put it exactly where you want, so it doesn't cover up anything important. It can be the size you want, bigger or smaller. However you want it to look, you get total control, and that can be very important.

01:04
The disadvantages of Open Captioning. The first disadvantage, which may or may not be a disadvantage for you, is that it only allows you to put one source on the image at a time. Now, within that source you could have, for example, two different lines, one line in one language and another line in another language. You can have three different lines, but you can only ever have one source at a time. You can't put one language on and then superimpose another language on top, because you wouldn't be able to read it. The other element is that you take control away from your user. There are times, for example when you have a learner with a learning impediment, when they may want the captions on part of the time but not all of the time. When you Open Caption, you take that control away from the user. Depending on your application, that may not be a good thing.
Section 3: Planning
03:05

General Workflow Considerations. Since you're a content creator already, you realize that creating video requires a lot of storage space. So when you introduce additional video streams that you then need to caption, you're going to have your uncaptioned video and your captioned video, and you may then further encode it another time, so you're introducing the need for greater storage. That's important.

Another consideration is versions. As an example, I may Open Caption for a course like Udemy, but Closed Caption when I take the same video and put it on YouTube. Another example would be if I have a video of a certain length that I want to use in one channel, and then I want to shorten that video for another channel. In all those scenarios I'm going to have various versions of the video file, and now the transcripts and sync files too. So it becomes exponential in terms of version control.

So my advice here is, make sure when you're incorporating your caption process to incorporate some sort of version control. For me, I just keep things in folders. I label the folder very extensively. I label the file very extensively and by looking at the file in the folder I know exactly where that file is in the whole process.
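The extensive labeling practice described above can be made mechanical. Here is a small Python sketch (an illustration, not the author's actual scheme) of one possible naming convention, where the project, workflow stage, and version are all visible in the file name; the project and stage names are hypothetical.

```python
import re

def versioned_name(project, stage, version, ext):
    """Build a descriptive file name: project, workflow stage, version."""
    return f"{project}_{stage}_v{version:02d}.{ext}"

def latest_version(names, project, stage):
    """Find the highest version number for a project/stage in a folder listing."""
    pat = re.compile(rf"{re.escape(project)}_{re.escape(stage)}_v(\d+)\.")
    versions = [int(m.group(1)) for n in names if (m := pat.match(n))]
    return max(versions, default=0)
```

Looking at a name like `course_sync_v02.srt`, you know exactly where that file sits in the whole process, which is the point of the labeling.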

The next major workflow consideration is that captioning is an additional time-consuming process. You have to build into your process time to transcribe, to sync, and to re-encode the video. Depending on your volume and/or your quality (both volume of video and quality of video become issues), it can require some pretty heavy processing. Let's say you do Open Captions: you're going to be taking a video stream that could be really high quality, creating a high-quality text version, and burning a new high-quality video version with that text superimposed on top. That just requires a lot of raw horsepower. You need to consider how much processing equipment you might need, and you may or may not have the right equipment for that.

Generally speaking, if time isn't an issue then you likely already have all of the equipment you need. You just need to add more time to your process. If time becomes an issue, you may use techniques and hardware to accelerate the encoding process. Or you may get a new computer to be able to do higher volumes in a shorter amount of time.

02:03

When to Caption. Conventional wisdom says that you do captioning after you create your video. So you create the video. You edit the video. You add your graphics, get all your narration down, get everything exactly the way that you want it. You export that file, and then transcribe, sync, and burn off of that final output file.

The reason why conventional wisdom says to do that, is so that you don't have to edit the sync file and re-burn your image if you make changes in the middle. So, in the creative process we make changes all the time. You want to get all those changes done, so that all you're having to do is manage your video output and then manage your captioning process secondarily.

Now, in the course that I'm teaching you, we will show you how to edit files later. So there are all sorts of times when you have a final video output, let's say that's ten minutes long, and you want to shorten that video into a bumper or something shorter for another distribution channel.

So we'll show you a process that allows you to do that when you need to. But you want to minimize how often you do it, because managing one file and its changes can be problematic; now imagine managing multiple sync files and multiple video files. It just introduces the possibility of more errors.

So you have the creation process, then you have the caption process. Now, in the caption process we've got Transcribe, Sync, and you'll see the word "Burn". I want to make a note here. Captioning is often referred to as burning, but what it really is, is encoding the text into the video image, and then you have your final encoding. So you may put your text in, in the case of Open Captions, and then do your final encoding separately. Or, in the case of Closed Captioning, you may add your Closed Caption track and do your final encoding at the time of adding the track. So, final encoding may be an optional step depending on your workflow.

00:09
In the next set of slides we're going to go through considerations for insourcing and outsourcing, and we're going to talk about the different kinds of outsourcing that you can do.
01:55

Outsourcing Considerations. The first one is that it can be a lot faster than if you try to do it in house. I outsource part of the process. It can not only be faster but, in my case, since I'm a horrible typist, it can also be a lot more accurate. I get a better, faster, cheaper product by outsourcing the transcription than if I were to do it in house.

Considerations on the con side: it can add additional costs. I now have a new invoice that I have to pay for every video that I do. It can add additional time. You have to do new quality control; if you insource the process and do it all internally, you may have good quality controls along the way, so you don't have to introduce a new quality control step.

It could be a security risk. Now, this is interesting. Think about what you're captioning. Let's say you are doing a set of internal training videos on a brand new product that no one is using in the market, and you don't want that information to be publicly available. You may choose to insource all of your transcription because you feel that information is just too risky to let out in public. You may also use security as a criterion for picking a transcription company, in terms of confidentiality.

In my former life I worked in the legal field around confidential information and its management. You would hear horror stories where corporations had documents leaked that they didn't want to have leaked. If the content of your videos is sensitive, you may want to consider insourcing versus outsourcing.

The other element here depends on how much of the process you outsource. If you outsource the whole process, you are going to be uploading high quality video files, and that is going to require more bandwidth. So you need to make sure you have bandwidth for getting the files from one place to another. That could be as simple as large hard drives if you are doing it all locally, or internet bandwidth if you are uploading.

00:50

Generally, there are two types of transcription services that will help you caption.

The first one is a traditional transcription service. At minimum, they'll provide you with the transcript. Now, that may be all you need; you may want to sync, burn, and encode in-house. Or, like the company that I use, they may not only provide the transcript but, for a nominal fee over and above it, also provide a sync file. It's so nominal, and the transcript price is so good, that I just have both the transcript and the sync file done and outsource that.

The next kind of company is one that specializes in video. They'll not only do the transcript, they'll also provide the sync file and, if you provide them the video, they'll encode it for you, either Open or Closed Caption.

01:50

Insourcing considerations. When you are keeping this in house, one of the pros is cost. And you really have to be honest with yourself: is it really lower cost, or just the absence of an invoice? Having someone who is not trained to do transcripts sit there and do transcripts likely costs the company a lot. There may not be an invoice attached to it, but there's a cost. It's something, in terms of budgets, that you have to manage and see what your budget will be able to do.

In terms of control, you get lots and lots of control when you do it in house. The quality can be higher. For example, if you're using a lot of terminology, or you have a speech impediment, or you have a heavy accent, your quality, if you insource, can actually be higher than if you outsource. And you can control security as well. Now, there are all sorts of cases where, you know, internal employees leak documents in a company. So security is something you're going to have to manage, if that's a concern.

Now, the cons. It's going to require more storage space. You are going to have to be your own technical support. If you're a one person company producing videos for, let's say, Udemy or YouTube, you've got to ask yourself, do you want to be your own technical support? So if you insource the whole process you're also going to have to be your own technical support.

Now again, we're going to show you a process that mixes outsourcing and insourcing, gets high quality, and manages it with very few technical issues. However, when an issue does arise, you're the one that's going to have to fix it; whereas if you outsource it, it's not your problem.

You're going to need a little bit more powerful equipment if you're burning your images in, as we discussed earlier. And you're going to need to be very, very, very organized. I cannot reiterate that enough. Now, I would argue you need to be organized whether you outsource or insource, because you're going to be receiving files from the outside world. So organization is critical, no matter what. You need to be extra organized if you're going to insource everything.

01:20

Transcript formats. There are a number of different formats you can get your transcripts in, whether you request them from an outsourcing provider or create them internally. What I highly recommend is that your process keeps everything in .txt, or text, format. The first reason is that it's easy to edit: almost any application or word processor reads a text file. The second is that there are very few or no hidden characters. Contrast a text file with, let's say, an .rtf, a rich text file. A rich text file allows you to pass bold, highlight, underscore, and font information along with the file itself.

Now, all of those are characters hidden inside the text stream, and they tell the program when to bold, when and what fonts to use, all those other things. That's a lot of information that your sync file doesn't need to have. The act of saving an .rtf into text format strips it of all that information. So you want pure audio descriptors and spoken words in your files, and you want to cut everything else out. That's a really key reason to keep them in text form. The other reason is that it's universal. I can pass a text file from a Mac to a PC. I can email it without adding other information to it, or it changing.
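Here is a minimal Python sketch of the kind of plain-text cleanup described above. It is not a complete RTF converter (saving as .txt from your word processor already does the heavy lifting); it just unifies the "smart" punctuation word processors typically insert and drops leftover control characters. The replacement table is an assumption about common cases, not an exhaustive list.

```python
import unicodedata

def clean_transcript(text):
    """Normalize a transcript to plain text: unify smart quotes and
    dashes, drop control characters, and collapse stray whitespace."""
    replacements = {"\u2018": "'", "\u2019": "'",   # curly single quotes
                    "\u201c": '"', "\u201d": '"',   # curly double quotes
                    "\u2013": "-", "\u2014": "-",   # en/em dashes
                    "\u00a0": " "}                  # non-breaking space
    for bad, good in replacements.items():
        text = text.replace(bad, good)
    # Drop remaining control characters (Unicode category Cc), keep newlines.
    text = "".join(ch for ch in text
                   if ch == "\n" or unicodedata.category(ch) != "Cc")
    # Collapse runs of spaces and tabs inside each line.
    return "\n".join(" ".join(line.split()) for line in text.splitlines())
```

Running every transcript through a pass like this before syncing keeps the sync file down to pure spoken words and audio descriptors.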

01:50

Sync file format. It took me about a year to figure out that I wanted to standardize on SubRip, or SubRip subtitle format, often known as .SRT. There are literally dozens and dozens of sync file formats out there, each one trying to do the same thing, which is to time text to video.

The history of SubRip, or SubRip subtitle, is that it was a program on Windows that was adopted fairly early on, when people were trying to take DVDs off of disks and onto their computers and play them there. They wanted to bring the caption files along, and it was a way of playing multiple caption files with the DVD. It has now caught on in the open source world.

So, let's get to why SubRip. First of all, it's easy to edit. A SubRip file, or a .SRT, is not a binary file. It's a text file that just happens to be named .SRT. That means you can literally open it and edit it in anything, and it's not going to ruin the file. There are no hidden characters, because it's a text file: a text file formatted in a certain way, but nonetheless a text file. It also has very solid open source support. A lot of the applications I'm going to show you today are open source, and .SRT is one of the primary file formats that translates between all these open source tools.

Lastly, it's widely adopted on devices, platforms, and viewers, which allows us to cast the broadest net when it comes to small organizations or individuals creating video for mass distribution on the internet, on desktops, and on mobile devices. So SubRip, .SRT, is the file format we are going to use for syncing.
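Since a .SRT really is just a text file in a fixed layout, it helps to see one. Here is a hedged sketch: the sample entries and timings below are made up for illustration (modeled on the demo video later in the course), and the small parser just shows how simple the format is to read programmatically.

```python
import re

# A minimal SubRip (.srt) file: numbered blocks separated by blank lines.
# Each block is an index, a time range (note the comma before the
# milliseconds), and then one or more caption lines.
SAMPLE = """\
1
00:00:00,000 --> 00:00:06,320
[MUSIC]

2
00:00:08,000 --> 00:00:14,355
Let's set up the Nexus 10.
"""

TIME = r"(\d{2}):(\d{2}):(\d{2}),(\d{3})"

def parse_srt(text):
    """Split an SRT string into (index, start_sec, end_sec, caption) tuples."""
    entries = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        m = re.match(TIME + " --> " + TIME, lines[1])
        g = [int(x) for x in m.groups()]
        start = g[0] * 3600 + g[1] * 60 + g[2] + g[3] / 1000
        end = g[4] * 3600 + g[5] * 60 + g[6] + g[7] / 1000
        entries.append((int(lines[0]), start, end, "\n".join(lines[2:])))
    return entries
```

Because the file is plain text, any tool (or a few lines of script like this) can read or repair it, which is exactly the portability argument above.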

01:26

Video formats. As with sync files, there are a ton of different video formats out there. Generally speaking, if you're a Windows person, you're used to AVIs or WMVs, the Windows formats. The problem with AVIs is that they tend to be very large, and they tend to play well only on a Windows device. Why MPEG4? MPEG4 tends to have very, very small files with relatively high quality. When you take an SRT file and burn it into an MPEG4 file, you'll find that it has widespread support on a lot of different devices and platforms. An MPEG4 with an SRT burned into it will play on a Windows device. It will play on a Mac. It will play on an iPad. It will play on an Android. I'll show you how all those work in this course. With every format, there are positives and negatives. The reason we settled on the MPEG4 format is that it's a small, high quality file that a lot of different viewers can play, so it gives us the widest distribution and the most flexibility. Another note: because it's a very small file, uploading it to services like YouTube or Udemy is much faster, and you don't use as much bandwidth.

Section 4: Workflows
00:15
Let's talk about what our Workflows would look like for YouTube and other services like Udemy and Vimeo. I'm going to break this down into services that you can Closed Caption for and then services that you Open Caption for.
01:32

The first one, specifically, is the Closed Caption Workflow for YouTube. YouTube is unique: if all I'm ever going to do is put a video on YouTube, I just need a video that's not captioned and a text transcript, and that's it. I'll show you later on in the course, in the video section, how to upload your transcript and how Google will automatically sync the two.

In terms of a Workflow, at the caption level you are only going to need to transcribe your video. The format you're going to need is a text transcript, so just .txt. The video format will be MP4 or MOV; MOV is the QuickTime format. You are going to load your MP4 up into YouTube, and then you are going to load your text file.

In terms of applications to get you there: if you are going to outsource, you are going to outsource the whole transcript. If you are going to insource, you can create your transcript in a product called Aegisub, and there's a whole series of tutorials on how to use Aegisub. In terms of YouTube, YouTube will do the captioning for you.

YouTube is unique because it has closed captioning as part of its viewer, and I think just because of its volume you have to treat YouTube separately. There are a lot of advantages to this. YouTube indexes your transcript, where it does not necessarily index its own automatic transcript, so you get better search results. As a side note with YouTube, here's a tip: also copy and paste your transcript below your description.

01:19

Most online viewers either have no Closed Captioning capability at all, or have their own custom captioning file format.

If you come across a viewer that has its own custom captioning format, then you may need to convert your SRT file into that format, and the viewer should have information about how to go about doing that.

In terms of the captioning process, you're going to transcribe, you're going to sync, and you're going to burn and encode your image; you are going to open caption your image. In terms of file formats, you're going to need an SRT file and a video in MPEG4 format.

So you're going to transcribe and sync in Aegisub if you're insourcing the whole thing, or get a transcript and a sync file if you're outsourcing it. Then you're going to use either RoadMovie or XviD4PSP to take your SRT file and encode and burn it into the image, so that you have a final output of MPEG4 that you can upload to your service.

The beauty of this process is that it doesn't matter what the service is: you can take that video and upload it to every service that plays MPEG4, and all of them do, and you are fully captioned with total control.
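For readers comfortable on the command line, the same transcribe-sync-burn pipeline can also end with the open source ffmpeg tool instead of RoadMovie or XviD4PSP. This is my own hypothetical aside, not part of the course workflow, and the file names are made up; the sketch only assembles the burn-in command rather than running it.

```python
# Hypothetical sketch: build an ffmpeg command that open-captions
# (hard-burns) captions.srt into an MPEG4 file. ffmpeg is not one of
# the tools covered in this course; it just illustrates the same step.
srt_file, source, output = "captions.srt", "lecture.mp4", "lecture-captioned.mp4"

burn_cmd = [
    "ffmpeg",
    "-i", source,                    # the un-captioned MPEG4 source
    "-vf", "subtitles=" + srt_file,  # render the SRT text onto the video frames
    "-c:a", "copy",                  # leave the audio stream untouched
    output,
]
# subprocess.run(burn_cmd) would perform the actual encode and burn-in.
```

The result, like the RoadMovie or XviD4PSP output described above, is an MPEG4 with the captions permanently in the picture, ready to upload anywhere.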

00:03
Next, we're going to discuss Desktop Viewers.
01:03
QuickTime and VLC Workflow. Now, QuickTime is the standard media player on the Mac, and VLC is an open source media player that is on the PC, on Linux, and on the Mac as well. There are also mobile versions of VLC; it is a very, very widely used open source media player. If I were creating files for these target applications, my workflow would be to go through the whole process: transcribe, sync, and burn. I'm going to need an SRT file and, again, the MPEG4, because that's the one we've standardized on. In terms of Workflow, we are going to either outsource the transcription and the syncing, or do the syncing and transcription in Aegisub. We are going to Closed Caption in HandBrake, or, if we want to Open Caption to give us complete control and even broader distribution, we would do that in either RoadMovie on the Mac or XviD4PSP on the PC.
01:31

Windows Media Player. Now, Windows Media Player will play Closed Captioned videos; however, it will not always correctly play an MPEG4 Closed Captioned video. According to my research, it won't read the caption track in the MPEG stream, but if you keep the SRT file in the same directory as the MPEG4 file and name both files the same thing, Windows Media Player should automatically grab that SRT file and play it on screen. In my own testing, I was never able to validate that.

My advice to you is to Open Caption when you're going to Windows Media Player. Take that responsibility away from your user, and have a file that you know will be captioned. Use Aegisub to do your transcription and your captioning or Outsource and then use RoadMovie or XviD4PSP to Open Caption your MPEG4. Again, the advantage of the MPEG4 is that you are going to have a very, very high quality file that's also very, very small, as compared to an AVI file, and that MPEG4 file can be used on a lot of different devices as well. It gives you the broadest possible distribution channels when you do it this way. 

00:09
Mobile devices. So if you want a caption for Android and iOS, that's what we're going to be doing next.
00:39

An iOS Workflow. What I mean by an iOS device is the iPad, iPhone, and iPod Touch. Again, back to our standards: an .SRT file and an .MP4 file. Because iOS is a closed, proprietary platform, creating captions for it is very, very easy.

So, you are going to do your syncing and your transcription in Aegisub or outsource those, and you are going to close caption using a product called HandBrake. This is an open source product and later on I'll show you videos on how to do that. HandBrake closed captions very, very quickly. HandBrake is also Mac, PC, and Linux.

00:29
Closed Captioning for Android. Now, what I will tell you is, there are two cases here. I would consider closed captioning for Android if you're going to be distributing on Android devices running 4.1 or above, because the default video player will play a captioned .mp4 file.
00:59

If you cannot control what Android device it's going to be played on, that is, if you're not just doing this for a single user or a set of users in a classroom but for mass distribution, then I would open caption. On Android 4.0 or below, the default media player won't handle it; there are many other media players on Android that will, but because Android is fairly open, different hardware manufacturers will ship different media players.

For the broadest distribution, and if you can't control what device it's being played on, just open caption, using RoadMovie if you're on a Mac or XviD4PSP if you are encoding on a PC, to create an open captioned file.

This would also apply, I should say, to BlackBerry as well as Nokia and other smartphone platforms.

Section 5: Transcripts & Syncing Software
00:43

For transcripts and syncing I use one tool: Aegisub. This is an open source tool made for Mac, PC, and Linux systems. What is really great about this tool is that it is also used to make karaoke videos. How cool is that? You know, if you can get up on stage and sing to a song, you are just cool in my book.

It's at Aegisub.org. It does a ton of things in terms of both the transcription process and the syncing process, and it outputs to a variety of different formats, including .SRT. The next series of videos is an introduction to Aegisub: how to use it for transcription and syncing, and even how to translate your captions into different languages.

05:03

Aegisub is open source software that was designed to let people like us create subtitles for a ton of different applications. It also was originally designed to create karaoke, so yay, right? Awesome, right? There's just nothing more awesome than that.

It is a very well supported piece of software. It has a long history and is considered the premier tool in what is called fansubbing, which is creating your own subtitles for movies, often for Asian animation. So this is a really elaborate piece of software. It does many more things than we're going to use for closed captioning, but it does a lot of great things, and it's open source and freely available.

So you can see here, we're on the website aegisub.org, and it is available on multiple platforms: Windows, OS X, and Linux, in both 32-bit and 64-bit versions, and also as source code if you want to modify it yourself. So, an incredible piece of software. What you need to do to be able to use it is download it. In my case I've downloaded the OS X version and installed it on my computer, so that's what you need to do next: install it on your computer.

So, I've already done that, so let me launch the program. When you first open Aegisub, it is very intimidating. I was very intimidated, personally, by all of the options; I didn't know what any of it meant. So rest assured, we're going to go through what you need to know in terms of closed captioning and being able to modify captions later on. It's a great tool: I constantly find new uses for it, and I don't think I've even scratched the surface of the features that it has.

So I'll show you the features that are good for closed captioning, and that will be a good basis of learning so it can grow with you. I'm just going to do a really quick, brief introduction. The first thing we would do if we want to caption a video, as an example, is come up here and Open Video. I'm just going to go and open this video that needs captioning, and it takes up a lot of real estate. If you think about captioning, captioning is less about the video, although the video can be important; it is mostly about the audio.

So what we're going to do, and I just did it with my mouse as I scrolled over, is minimize the video to the smallest size. Then I'm going to take the audio out of the file, because I'm really going to be captioning against the audio. So this is the video window here on the left, and this is the audio window on the right, and you can see that the audio file extends fairly far; it's a two-and-a-half to three minute long video.

So, if I want to zoom in and out of the audio timeline, I can zoom in and out. If I want to raise or lower the spectrum, I can do that by raising and lowering this, and you'll want to adjust these so they are right for you. But basically you can see that I'm not talking here, and I'm going to say something here, and I'm talking here.

03:46

As I use this program, there are a couple of settings, about five of them, that we're going to change in the preferences. I have found these tweaks to be really effective for captioning. So, let's do that.

The first thing I'm going to do is go up to the Aegisub menu and hit Preferences. The first setting I'm going to show you is under Interface. Depending on your screen resolution, this may or may not be a setting you want to change, but what I did was change the font size for the editing box to size 20, and I changed the grid font size to 20 as well. Here, let me reset the defaults.

The defaults are 12 and 13. I just found this too small. In the screen capture it looks okay, but when I'm using a really big monitor, I find it too small to read; my eyes just aren't that good. So, I'm changing these back up to 20, and you just do that by clicking on the... and then you hit Apply. You can see that it increased the size here, and, though you can't see it yet, it also increased the size of the text in the editing box.

The other settings that I changed are under Audio, starting with the default timing length, which is in milliseconds; two thousand milliseconds is two seconds. Before I change that, I want to change two others: the lead-in length and the lead-out length. There are some really cool buttons that, when you hit them, play the last little bit of the audio clip of that segment, or the first little bit, and these settings give you the length of those. I changed them to make them a little longer because I needed them to be a little longer.

So, the first one, the lead-out, I'm going to change to 400 milliseconds. When I go to play the last little bit of the clip, it's going to give me 400 milliseconds, which is a little over a third of a second. For the lead-in, I changed it to 300, and that seems to be the setting that works for me. If you want a longer lead-in, change it to a higher amount; 1,000 milliseconds is a full second, so 500 milliseconds would be half a second, and 300 is about a third of a second. That works really well as a lead-in. On the lead-out, I like the little extra length; it helps me acclimate to the audio. This could just be because I have an audio acquisition issue, I'm not quite sure, but making it longer is better for me.

Now, the other one: generally speaking, when you caption, you typically caption in about three second blocks. This is currently set to a two second block, so I want to change it to a three second block. There seems to be a slight little bug here: if I type in three and hit Apply and Save, it doesn't save. So you have to sort of... let me see if that will work. Yeah. So, typing three seconds and then ticking the value up and down by one will save it.

05:34

So in this video we're going to transcribe the video, using Aegisub as a transcription tool. What's nice is that, at the same time, it's going to sync and time the audio as well.

So let me show you how to do that. I've launched Aegisub; this happens to be on a Mac, but it should look the same on a PC and on a Linux system. The first thing I need to do is load up my video, so I'm going to open my video and pick this one as an example. It's fairly large in terms of screen real estate. While I do suggest that you use descriptors for what's in your video, like typing in MUSIC when music plays, I don't need the video to be this large, so I'm going to minimize it down to 25 percent.

What I really need is I need to really see the audio. So under the audio file I'm going to open audio from video and what this does is it opens up the video file and it extracts the audio out of it. So let me play you a section here.

This is a waveform monitor of the actual audio itself, and I can use these zoom controls to zoom in or out; as you need more accurate transcription, those become really important. I can also decrease and increase the waveform to fit. Sometimes it's good to see it like this, and sometimes it's really good to get in there and zoom in like that; it sort of enlarges it. So you'll use these transport controls and zoom controls; once you figure out what works for you, you'll go from there.

You can see that this is a big long section. If I play it you can hear it. [music] It's music. And I can even play this little section. Notice I just grabbed and dragged, I clicked and dragged. So that's all music.

If I want to tell Aegisub that this whole section is music, notice what I did: I grabbed the left and right edges, so that's the end point, and this was the beginning point right over here. And notice, when I change this, what happens down here: the start point updates. So this starts at zero and ends at 6.32 seconds, and it's all music. So I can just come into my text section and put a bracket, because this is a descriptor: bracket, MUSIC, close bracket.

Anytime you're describing anything like music, or any on-screen action that's important to your video, you've got to think about it primarily in terms of people who can't hear the audio. You put it in brackets, and oftentimes caps are used to distinguish an action from someone's speech.

So here's music, and you can see that it comes in here. Now I have to tell Aegisub that I'm ready to move to the next line, effectively the next piece of text. So when you see this little check mark, if I click that check mark, a lot of things happen. I'm going to click it. It advanced to the next part of the video, and notice that it advanced up here in my video. It gave me three seconds

08:57

We're going to use Aegisub to sync a preexisting transcript to a video file. So the first thing I need to do to get started is find my transcript. The scenario we're assuming is that you got your transcript done through a transcription service that didn't do audio-video sync, or that it's a later date and now you need to sync the transcript to the video, and maybe the video was edited a little bit.

So the first thing we're going to do is load up our transcript. Let me go to my transcript files. There are my transcripts; I'm going to pick a text-only file, and you can see that this is a straight transcript, just a straight text file, nothing magical here. And I'm going to load it up, and the way I'm going to do this is just grab this file and drag and drop it into the grid.

Now you can see, what Aegisub has just done is ask me, "Do I have any [inaudible 0:00:59] separators?" and "Do I have any comment starters?" In this particular case, it was a straight text file, so there are no actors and no comment starters; I can just click through this and ignore it.

And what it's done here is load up my text file, line for line. Now, just a slight tip here: you don't want continuous run-on paragraphs. You're going to want this broken up into smaller chunks, with your text file having carriage returns between the paragraphs, because that's how it knows to move to a new line. Otherwise you're going to have long run-on paragraphs, and then you're going to have to do a lot more editing down here.
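The tip above can be automated before you drag the transcript in. This is a small sketch of my own (the 60-character width is an assumption, not a rule from the course): it breaks a run-on paragraph into short lines so that each line imports into the grid as its own caption chunk.

```python
import textwrap

# Hypothetical pre-processing step: wrap a run-on transcript paragraph
# into short lines, since Aegisub starts a new caption at each line break.
def chunk_transcript(paragraph, width=60):
    return "\n".join(textwrap.wrap(paragraph, width=width))

prepared = chunk_transcript(
    "Let's set up the Nexus 10. The first thing I'm going to do is "
    "come over here and walk through the initial configuration screens."
)
```

Save the wrapped result back to your .txt file and the grid will load one manageable chunk per row instead of one giant paragraph.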

So I've got my text loaded and you can see that everything is zero to zero. So now I've got to bring in my video. So let me open up my video here and I'm going to reduce the size down to 25 percent. And I'm going to extract my audio, so that I have my audio file.

So the first thing that we have here is if I play this first segment here; let me highlight this for a second and play it.

[music] Well, it's music, but I don't have music here in my transcript, so I need to add a line. I highlighted the first line, right-clicked, and chose Insert Before to insert another line here, and I'm going to call this music. So I highlighted the audio to get my zero to six seconds, and since music is an action and not a speaker's voice, I'm going to put a bracket, write MUSIC in all caps, close the bracket, and then click "OK".

I've now added it to my first section, so let's slide down. Now we need to find "Let's start with the Nexus" and end with "Over here." So let's select this, and my start, so I hit let's set up, that's correct.

And let's listen to the end sequence, so I'm going to play this.

[Video]: Let's set up, the Nexus 10.

Let's set up the Nexus 10. So I've got the next part, but I'm missing this whole other section. That's likely down here; I'm just going to drag my cursor all the way down

03:09

In this video I'm going to show you how to export out of Aegisub into the SubRip closed caption file format. Where I am here: I'm in Aegisub, I have my video, I have my audio, and I've got my transcription all done. What I want to do from here is just export my file into the SubRip format.

Let me go into Aegisub. Under File, Export As, I have the export dialogue box. We have templates here for different things, like karaoke and transform; in terms of captioning and a SubRip file, you're not going to use any of these filters.

The next thing you need to decide is what sort of text encoding you want. I use UTF-8. If you are using Mandarin or other double-byte Asian languages, you're going to want to go up to UTF-16 or, in some cases, UTF-32. The larger encodings make for larger files: if you were to save English in UTF-32, it just makes for a bigger file.

Text files are pretty small, so it's not that big of a deal. Because most of mine are in Western languages, I use UTF-8, and that tends to work very well for most languages except double-byte Asian characters. UTF-8 is the most universal.

I hit Export, and it gives me a Save As dialogue box. This is "my transcript", "my caption file". In terms of file type, I'm going to drop down this menu. I've got a couple of different options: EBU, an Adobe format, a MicroDVD sub format. This is the one that I want: the SubRip, or SRT, format. I select that and just hit Save.

Let's see what an SRT file looks like. This is what it looks like inside of Aegisub. Let's see what the file looks like before you were to load it up into, for example, YouTube.

This is my file, and I'm just going to right-click here. I'm going to open it up in a text editor. Let me find my text editor, Text Edit, not very creative. It's going to open up the file.

What you see here is, let me load this up and you can see: the first entry starts at 00:00 and ends at about 00:06.56. I have music; there's my text. The next clip is from 8 seconds to 14.355, then, again, the same thing for the next clip, and so forth and so on, for however long your video is.

This caption file is used by YouTube to load in and caption this on screen properly. This file is also used in a bunch of other applications, which we'll get into later. And that's how you create your fully synced transcript file.

03:53

In this video I'm going to show you how to use Aegisub to time shift your transcripts. The scenario here is that you have this great video that you've done before, and you want to tack on an introduction to it.

So you did a great video, you had it transcribed, you've got this transcript from the previous video, and now you've tacked a six second intro onto the video, so you need to time shift the transcript before you burn it in. That's exactly what happened here, by the way: I have this video, and I added this little introduction with a little piece of music.

And when I click on this, it's at 2.22 seconds. It's supposed to be "Let's start," and it's not, so this is not correct. I could literally go into the file and hand edit the whole thing, or I can use a great little Aegisub feature that lets me time shift, which saves a ton of time.

Now, this line is supposed to be "let's start," and it says it starts at two seconds, but really "let's start" is right around here, and we can play that.

[Video]: Let's set up.

Or "let's set up." For the sake of easy math, let's say it starts at eight seconds in. So I know it's going to start at eight seconds in, and what I want to do is time shift this whole thing: I want it to start at eight seconds, and I want the rest of it to shift accordingly.

So I've got 2.22 seconds, and I need to time shift the entire transcript. I'm going to come up here under Timing and hit Time Shift. The amount I need to shift by is the difference: eight minus 2.22, which is 5.78 seconds. (My math on camera was horrible there; apparently I can't do arithmetic out loud, but 5.78 is the number.)

So that's 5.78. What I'm going to do here is shift everything forward, all rows, start and end times, by 5.78 seconds. So now when I click on this and hit play, it should go,
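Under the hood, Time Shift is just adding the same offset to every timestamp in the file. Here is a hedged sketch of that operation (my own code, not Aegisub's), assuming SubRip's hh:mm:ss,mmm timestamps; working in whole milliseconds avoids floating-point drift.

```python
import re

# Match one SubRip timestamp: hours, minutes, seconds, milliseconds.
STAMP = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_srt(text, offset_ms):
    """Add offset_ms milliseconds to every timestamp in an SRT string."""
    def bump(match):
        h, m, s, ms = (int(g) for g in match.groups())
        total = max(((h * 60 + m) * 60 + s) * 1000 + ms + offset_ms, 0)
        s, ms = divmod(total, 1000)
        m, s = divmod(s, 60)
        h, m = divmod(m, 60)
        return "%02d:%02d:%02d,%03d" % (h, m, s, ms)
    return STAMP.sub(bump, text)

# The scenario from the video: a caption at 2.22 seconds shifted by 5.78
# seconds (5780 ms) lands at exactly 8 seconds.
shifted = shift_srt("00:00:02,220 --> 00:00:08,000", 5780)
```

So whether you use the Time Shift dialog or a script like this, the whole transcript slides forward by one constant offset and stays in sync with itself.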

[Video]: Let's set up, the Nexus 10. Well, the first thing I'm going to do-

See, and now it's correct. The other thing I need to account for is creating a new line that has my music in it. So right here I'm going to right-click and Insert Before, which gives me a blank line. I'm going to go all the way from zero to this little music part, and I'm going to type in bracket, MUSIC, close bracket.

And now I've not only time shifted my entire transcript so it's proper, I've also added a new line to the transcript for the musical introduction. So this little Time Shift feature is a phenomenal tool in Aegisub.

02:55

So I'm going to show you how to use Aegisub to translate your synced transcript into another language. What I have here is my video set up, my audio set up, and my English transcript.

Aegisub is on the left-hand side; on the right-hand side I've got a Google Translate window open. Obviously, for music there's nothing to translate; I mean, you could translate it into "musica," but I'll show you a line of text.

So, what I have here is "Let's set up the Nexus 10" for the first time. Under Subtitles, I open the Translation Assistant; because I have that line highlighted, that's where it starts, line two of six. Let me just select this. And here I can play the audio.

[Recording: Let's setup the Nexus 10.]

And I can play the video...

[Recording: Let's setup the Nexus 10.]

...and audio together to give you a sense, if I need that.

[Recording: So first what I'm going to do is, over here...]

I'm going to copy that. I'm going to copy and paste that into my English side. And then I'm just going to have this translated into Spanish, real quick. So I'm going to copy that, and I'm going to bring it back and paste it into the translation window.

Now, you'll see the keyboard shortcuts here. In order for me to have this saved in the system, I need to press Enter. So that's what I'm going to do right now: I'm going to hit Enter on the keyboard. And let me show you what happened: it replaced my English text with the Spanish.

Let me do that again... and copy... translate. Now, my Spanish is not that great, so I'm not entirely sure this is correct and accurate. But let's assume for a second that Google Translate is doing this correctly and accurately. Okay. Then it's on to the third one, all right; I just keep doing this until I get to the end.

And then when I go to Export, again, it's the same thing. I Export As and pick the encoding. If I were doing an Asian language, Japanese, Korean, Mandarin, or Simplified Chinese, I could pick a larger format for the text file; Spanish doesn't need one. So, again, I stick with UTF-8.

I go to Export, and here I name it my caption file, Spanish, then pick SubRip and save. Now, I haven't translated the whole file, but if we were to go back and look at it, you'd see that all the lines that were translated are translated in that file. And that's how you can translate your transcripts, or your closed captioning, into multiple languages.

01:36

In this video I'm going to show you how to edit an SRT file to make sure that, when you open caption, a line of text doesn't burn off the edges of the screen.

So what I have here is my SRT file, and I've loaded it into Text Edit, which is a text editor on the Mac; you could do the exact same thing in Notepad on the Windows platform.

Basically, this is the first line of my SRT file: it starts here, it ends here, and this is the line of text. Now, my concern is that depending on how large the text is and how small the video is, a portion of this line can actually burn off the side of the video image, in which case you wouldn't be able to read it. So, to ensure that it will display properly, I take these single lines of text and break them into two lines.

So what I'll do here, at about halfway through, at the "how," is put my cursor there and add a carriage return. I'll do the same thing for basically every line down the file, trying to break each line where it makes the most sense from a reading perspective. I'll keep going through the whole file until it's done, then save it back as an SRT file and use that edited file to open caption. If you are closed captioning, you don't need to worry about this at all.
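This hand edit can also be scripted. A sketch of my own (the 40-character limit is an assumption; pick whatever suits your video size and font): break one long caption line into two at the space nearest its midpoint, which usually lands close to a sensible reading break.

```python
# Hypothetical helper: split a long caption line into two roughly equal
# lines at the space nearest the middle, mirroring the manual edit above.
def split_caption(line, max_len=40):
    if len(line) <= max_len:
        return line                    # short enough already
    mid = len(line) // 2
    left = line.rfind(" ", 0, mid + 1)   # nearest space at or before the middle
    right = line.find(" ", mid)          # nearest space at or after the middle
    if left == -1 and right == -1:
        return line                    # no space to break on
    if right == -1 or (left != -1 and mid - left <= right - mid):
        cut = left
    else:
        cut = right
    return line[:cut] + "\n" + line[cut + 1:]

out = split_caption("Well the first thing I'm going to do is come over here")
```

Run every caption through this before saving the SRT and no single line should run off either edge of the burned-in video. You would still want to eyeball the breaks, since the nicest split for reading is sometimes not the exact middle.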

Section 6: Captioning Software
00:31
The first tool that we're going to go through is a product called HandBrake. Again, it's open source, and it's available on the Mac, PC, and Linux platforms. It's a tool that will convert a lot of different file formats into either MP4 (MPEG4) or, I believe, MKV, which is another container format like MP4. I suggest that you keep things in MP4, if I haven't said that enough. It's available at HandBrake.fr. This next series of videos will show you how to Closed Caption in HandBrake.
02:41

So, before we get into setting up HandBrake for soft subtitling, I want to make a couple of points. First of all, HandBrake is Mac, PC and Linux-based. It has a huge, huge following, and basically it's used to do a lot of conversions between different systems. It's a very simple interface: here's your source, here's your destination, and then your output settings.

So, let's pick a source. I'm going to come in here and pick my YouTube video. There's my YouTube video, so I picked an MPEG source. I then want to pick an output, and we're going to name it HandBrake-subtitle. And while it says it'll hard subtitle, it actually does not hard subtitle. Well, I don't use it to hard subtitle, let me put it that way.

So, I'm going to pick my video codec. I tend to pick H.264, because it gives me a really high compression rate. It takes a little longer to do the processing, but it gives you a really high compression rate. The audio we don't need to worry about; the file already has audio. Subtitles: I now want to add my subtitles, so, add external SRT file. I'm just going to go to my SRT file, and this is the time-shifted one that we created earlier. It's English, and if you remember correctly, when we exported out of Aegisub we used UTF-8, so I'm going to pick UTF-8, and I'm going to make this the default subtitle. And of course, I would change the language if the language were different.

And then, that's all I need to do. All I need to do is hit start, and this will start processing the file. This is a pretty fast process, but I'll still stop it and come back when it's done. HandBrake has now finished the MP4 file, soft-burning our captions using our caption file. So, let's go take a look. I've loaded it up already in QuickTime Player, and I've made sure, under View, Subtitles, that I actually turned on the subtitle, and you can see here...
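If you'd rather script this than click through the GUI, HandBrake also ships a command-line version, HandBrakeCLI, with `--srt-file`, `--srt-codeset`, `--srt-lang`, and `--srt-default` options for muxing an external SRT as a soft subtitle. Here's a hedged sketch that only assembles the argument list; the file names are made up, and flag details can vary by HandBrake version, so check `HandBrakeCLI --help` on your install:

```python
def handbrake_soft_sub_cmd(video_in, video_out, srt_file, lang="eng"):
    """Build a HandBrakeCLI argument list that embeds an external SRT
    file in an MP4 as a soft (switchable) subtitle track."""
    return [
        "HandBrakeCLI",
        "-i", video_in,            # source video
        "-o", video_out,           # MP4 destination
        "-e", "x264",              # H.264 video codec, as in the lesson
        "--srt-file", srt_file,    # external SRT to embed
        "--srt-codeset", "UTF-8",  # the encoding we exported from Aegisub
        "--srt-lang", lang,
        "--srt-default", "1",      # make it the default subtitle track
    ]

cmd = handbrake_soft_sub_cmd("youtube.mp4", "youtube-subbed.mp4",
                             "youtube-timeshifted.srt")
print(" ".join(cmd))
```

You could hand the resulting list to `subprocess.run` for a batch of files; the point is that everything done in the GUI above has a scriptable equivalent.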

[Video]: The Nexus 10, The first thing I'm going to do is over here...

So, you can see that it is on. Now, this is a soft burn, because I have it turned on; if I turn off subtitling, then you can see that it's actually turned off. So, that's how I use HandBrake to soft subtitle MP4 files.

00:44

Open captioning on the Mac. There are two programs that I'm going to walk through and teach you how to use. Submerge and RoadMovie. Now, they are made by the same developer and they do the same things. The reason why I am showing you both programs is Submerge is less expensive and really designed for one file at a time. It allows you to preview your captioning one file at a time. RoadMovie is the bigger brother application and what it allows you to do is it allows you to batch process multiple movies at the same time. Also, both of them do conversion as well. I'm going to show you both of them in this series of videos as we go forward, and then you can pick and choose which one's right for you.

00:26
Submerge. Submerge is designed to do Open Captioning of one file at a time. You can see on the right-hand side, those are your options. Then, you can see you can preview it before you actually encode it. That's what's really nice about this. This is a Mac only application and the following video will show you how to create an Open Caption file.
02:48

In this video, I'm going to show you how to use Submerge, which is the little brother application to RoadMovie. What's really great about this application is that it allows you to embed your subtitles or burn them in on screen, and it allows you to preview the whole process before you do your actual final encoding. So, what's nice about this is you can do quality control before you're done, and then just check the file to make sure that it's not corrupt when you're done. So: Bitfield is the company, the program is Submerge, and it's nine dollars.

So, let me show you how it works. I have it loaded. This is going to be one file at a time, versus RoadMovie, the bigger brother application that allows you to batch process. So, I've got to open up my movie file. In this particular case, I go to my MPEG-4, open it up, and bring it in here. What I have here is, on the left-hand side, the movie; on the right-hand side, all of my settings for the transcript.

So let me load up my SRT file. The first button is Choose. I'm going to go to my time-shifted SRT file and open that up, and it's going to render it out. What I have here is relative size, so this is medium; I can make it larger if I want and then re-render. And I can come in here and make sure that none of this has fallen off screen. So that's pretty good. Let's see what extra large looks like, and let me render that.

So that's really nice. It's really big, and as I go through here, none of it's really being cut off. Also, you can see that I'm using an unlicensed version right there. I'm doing that for the sake of this video, because I use the big brother application, but right there, I've got a really nice, big, uncropped, hard transcript. Then, if I like the way this looks, and none of it's cropped off anywhere, great, I can just burn it to a file. So, I'm going to come under here and save it. We're going to call this Merge. I can go to my setups, change my video output, and click save, and it'll start burning. I won't show you the whole process, but that's basically Submerge in a nutshell.

00:34
RoadMovie 2 is a batch-processing utility that allows you to closed caption or open caption a file, and it allows you to do it in batches. So you can load up a bunch of them, have them process overnight, and come back and check them the next day. So this is a series of videos introducing you to RoadMovie and all the various preference setups to be able to batch-encode either open-captioned or closed-captioned files.
01:18

In this video, we'll do an introduction to RoadMovie. RoadMovie is a Mac utility that allows you to batch process video and subtitles all at the same time. So, you can load up four or five of them and just let them run all night. This is the big brother application to Bitfield's Submerge app for subtitling. It has a lot of the same features; the primary difference, and the reason why I use it, is that it allows me to batch process. It's 29 dollars at the time of this video.

So this is the actual interface itself. What you see here is presets, destination, settings, metadata. So you can change a lot about your video, and this is important, because subtitling is one of the last steps you're going to do before final distribution. This allows you to add a lot of different metadata, and in another video I'll go through the various settings. But it also allows you to convert one video format into another. So, you can go from a lossless, high-bandwidth format into a very, very compressed, internet-based format, all while hard captioning or soft captioning at the same time.

01:18

In this video I'm going to go through the preference settings for subtitling in RoadMovie. Under the RoadMovie menu, go to Preferences, and the second icon over, Subtitles, gives you the basic preferences.

Because this is a batch-oriented program where you can run a bunch of videos at the same time, all of your subtitling preferences are going to be loaded in at once, and they're going to be based on defaults. That also means if you want different preferences for different videos, then run them as different batches.

But generally speaking, we have a default language you can pick from the list of languages. Encoding: this one guesses, but you can also force it to whichever one you pick. In this series we've been using UTF-8. You can also pick the font, the size, bold, and italicize, and you can see here that it will give you an onscreen preview; I selected italicize.

You can change the color and the background, the alignment (center, right, left), and vertical offsets. So, basically, depending on your different sizes of video... notice that we don't set video sizes or font sizes here. It's going to try to scale up and scale down based on the resolution of your video.

But you're going to need to experiment with these settings, so that's why I wanted to run through them now.

02:02

In this video, we're going to go through two preliminary settings, the presets and the destination, prior to doing our first subtitle project in RoadMovie.

So, in the upper left-hand corner, I have the presets menu, and it has a lot of great settings built in. I've got a lot of hard drive space, so I generally do everything uncompressed all the way until the end, and then compress it once. So, I have built two presets here: Lossless Subtitle Export Hard, meaning the subtitles are burnt in on the image; and Lossless Subtitle Export Soft, where they're not burnt in, but incorporated inside the file itself.

And then, if you want to see my settings here: my video is very high quality, full 1080p at 24 frames per second, so that's a fairly large video. I also have a fairly high sound quality. These are both lossless, so I don't do my compression until the very end. But again, I manage a lot of hard drive space and a lot of organization on the drive to do that.

So, those are the two presets. You can create as many as you want. It's simple; you've got a lot of different options, and you just step through a wizard to set them up.

So, that's presets. The next one is destination. Because this is a batch process, you're going to want to set this up; you can upload to YouTube directly, send to iTunes, or save to your movie folder. What I've done here is I've just created a generic folder, and it says you need to edit this destination each time. So, every time before I do a batch project, I come in here and pick a destination; this time I'm going to pick the Nexus 10 folder as my destination for this project, and that's it. So that's my export folder and my presets. Once you set those two up, then you can move on to actually batch processing.

04:37

In this video we're going to set up RoadMovie with our first batch process and actually go through and hard subtitle a video. There are two windows here: the "drag movies in here" window, and my status window, which shows the batches as they're processing. So I can come down here on the left-hand side, click on the plus button, and add my first movie. This is the stock movie we've been using. When I add it, it's going to ask me for two things. One is what video setting I want, and in this particular case we're going to pick Subtitle Export Hard, which will hard subtitle for me. Then, I want to save and upload, and based on our previous video where we changed the destination, I'm just going to pick my export folder.

Metadata: I can come in here, and if this were a full production and I were about ready to produce this out, I would change all of this. I'd add a description, add directors, and make sure that I'm putting good, clean metadata in there, because you never know what the different systems will pick up in terms of search engine optimization. So it's important that you do that.

Subtitles. So this is really what it's all about. I'm going to go ahead and add my subtitle. It supports a couple of different formats; I tend to use SRT. So this is our subtitle; I want to make sure this is the time-shifted one, yep. It's in English and it's UTF-8; it's already figured that out. And what's nice about this is you can add more subtitles, so I could add the Spanish one if I wanted to. Let's assume for a second that this is the Spanish subtitle. Well, we'll just pick one; let's call this Georgian. And notice over here, whichever one's highlighted is going to be the default one. This has now flipped me over, so I'm going to go back to English. So that's English and Georgian, and I'm just going to get rid of this. But you can add multiple subtitles and embed those in.

Now, because we're hard subtitling, it's going to pick the default; whichever one is highlighted is the one that's hard subtitled, so you can only hard subtitle one at a time. The other thing that's really nice is I can actually take a look at the subtitle file. One of the things you have to watch for is really long run-on sentences; you're going to have to edit your SRT file, because they'll literally go off screen. You can see here that each of the entries in this SRT file is a two-line entry, so it's actually not going to run off screen. And that's something you're going to want to check for in your quality control.

So this is the right SRT file, the right video file. I'm going to hit chapters; I'm not going to create any chapters or tracks. So now all I have to do is accept, and it will start processing. And I can add the next movie as this is processing, and the next one, and so on.

00:39
Open captioning on the Windows PC platform. When I say that there are a lot of different captioning tools on the PC platform, that is no joke: there are dozens and dozens and dozens of them. What is great about XviD4PSP is that it was designed to convert DVD files and output them to mobile devices or to servers, so it's a high-volume batch processing utility. It also does a great job of open captioning. It's open source, it's Windows-only, and it is very fast and high quality.
04:56

In this video we're going to use XviD4PSP to hard caption a video file on the PC. What's nice about XviD4PSP is a couple of things. One is that it's open source. It's a fairly large download, about 80 MB, but it has everything you're going to need; it will install everything it needs to be able to do conversion, and it relies on a lot of open source programs. It was originally designed to take high quality DVDs and convert them down into much smaller file formats for portable devices. So there are lots and lots of options, and you might find yourself using this to do a lot of other things as well.

Let me show you how to hard caption a video file. I'm going to open it. First, if we show all the different file types: look at that. That's crazy. All files. Now, I want to show you two things before we load.

This is the file that I'm going to be loading, which is the first-time-setup YouTube video, in MPEG-4 format. Also notice that there's a subtitle file, an SRT file, with the exact same name. It doesn't have to be, and I'll show you how to use different subtitle files, but it's going to do something very interesting: because this has the exact same name, when I open the MPEG-4, which I'm doing right now, it will automatically pull in the subtitle file.

I'm going to show you in this video how to select one if it doesn't automatically pull it in. The key is that it's able to pull it in automatically because the SRT file is named exactly the same as the video file and sits in the same directory. So it's a little bit of magic, but don't worry: you're likely going to keep your transcripts in a separate directory anyway, and I'll show you how to do that.
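That same-name, same-directory matching rule is easy to reproduce yourself, for example when checking a batch of videos before loading them. A small sketch, with file names made up for the demonstration:

```python
import tempfile
from pathlib import Path

def matching_srt(video_path):
    """Return the SRT sitting next to a video with the same base name,
    or None if there isn't one -- the rule XviD4PSP's auto-load follows."""
    srt = Path(video_path).with_suffix(".srt")
    return srt if srt.exists() else None

# Tiny demonstration in a throwaway directory (file names are made up).
with tempfile.TemporaryDirectory() as d:
    video = Path(d) / "first-time-setup-youtube.mp4"
    video.touch()
    (Path(d) / "first-time-setup-youtube.srt").touch()
    found = matching_srt(video)
    print(found.name if found else "no matching SRT")
```

If `matching_srt` returns None for a file, you know you'll need to add the subtitle manually, just as shown in the tool.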

So it's processing the video. It's just saying 'Hey, what kind of video am I?' In a second here it will pull up and load the video. So it says 'What kind of audio does it have? What kind of video does it have?' And then, 'What do I want to do with it?'

Here is my video, and you can see it has already pulled in the subtitle file. What's great about this is I can go to a different frame and see the subtitle, so I can preview and make sure that my subtitles aren't going off the screen. There aren't any options to change the subtitle size. If the subtitle file didn't get pulled in, go under Subtitles and click Add; it will open up a window and ask you for your subtitle file.

Now that it has opened up the window, I'm going to pick my subtitle file, and that is the YouTube one right here. I'll click it, and it will re-render the subtitles so that I can preview them. So we're going to wait for that to happen. Okay, there it's re-rendered, and I can go through and see how these are going to look. I can quality control it.

Now the next step in the process is to choose your output settings. My input setting was MPEG-4, and I'm going to leave my output setting as MPEG-4 as well. Might as well; it's good, high compression with good quality. And then you can see here, when you drop down this menu, you can see the other options available.

00:09
So one of the things I wanted to do was show you how to upload a transcript to YouTube and make it the default transcript within YouTube. So this video shows you how to do that.
03:40

In this video, I'm going to show you how to upload your translated caption files into YouTube. I'm in the YouTube Video Manager right now, and these are the files that I have uploaded already; this is a first time setup.

There are five main tabs: information, enhancements, audio, annotations, and the one we care about the most, which is captions. So, I have two of them: the automatic caption, and the English-language one that I uploaded already. And I'm going to upload the Spanish one I just translated. So, here is Add New Track; I can request a translation, or I can upload a caption file or transcript, and that's exactly what I'm going to do. I'm going to upload the file I just converted into Spanish.

So there are my files, and this is my caption file in Spanish. I'll click upload. Then it's going to ask me what language, and I'm going to pick Spanish, Mexican Spanish. And it's going to ask me for an optional track name, and I'm just going to put Spanish. Now, you don't have to do this; you could just leave it default and it'll just put Spanish, but I'm showing you for the sake of this video.

There you go. And then I can click Sync, and it will sync up the transcript; that's what it's going to do now. Once it's done, you can then play it in Spanish. So, that's how you upload a translated transcript to YouTube.

00:20
In this video, I want to show you how to take a second transcript. Let's say you translated your caption file into another language and you want to load that second transcript into YouTube instead of burning it using HandBrake. Then this is how you do that within YouTube.
01:44


Section 7: Viewing Captioned Video
00:14
Viewing caption files. In this next section, I'm going to show you how to turn on closed captioning so that you can actually view the caption file in the various viewers you're going to come across.
00:26
In QuickTime, the default is to have your subtitles turned off (they don't call them captions, they call them subtitles). So, when you've loaded a video that has subtitles or caption files in it, a closed-captioned video, you want to go under View, Subtitles, and then pick the language. If you've got multiple caption files under there, pick the language that's right for you. Then you can see the subtitle on the screen.
00:31

For Windows Media Player, like I said, this is open caption, so it's burnt right onto the image itself. But if you do have a closed caption file that will play within Windows Media Player, you right-click anywhere on screen in Windows Media Player, pick Lyrics, captions, and subtitles, and then On if available. That's how you turn on closed captioning.

Or, if it's open caption, which is what I recommend for Windows Media Player, then it's always on.

00:46

The next one is VLC. VLC not only recognizes a caption file within an MPEG-4 stream, it will also allow you to load a second caption file independent of whatever's inside the file and pick that one. Load up your video file, and under Video, under Subtitle Track, you can either open an entirely different one that's independent of the video file, or turn on whichever one's inside the video file.

So VLC has lots of different options, and again, it's an open source player across multiple platforms. If you have Macs, PCs, and similar devices in your life, VLC may be something you want to standardize on in terms of media playback.

00:25
On the iPad. When you have your video playing, you will notice in the play control, the transport control, that you will see a small caption icon. You can see it there on the left. I have it in a red circle. Tap on that. By default, your caption is turned off, and so in this particular case, I have only one caption file and it's in English. So, I tapped on it and you can see underneath that it will display the captions.
00:37
Android. So this is a screenshot from an Android device, actually from my cellphone, which runs Android 4.1.2. The Samsung media player automatically grabbed the file and displayed the captions without me having to do anything. However, I was not able to take the same file and play it on the Google Nexus 10, which uses Android 4.3. In general, if you're targeting Android devices for maximum distribution with minimum effort, I would recommend open-captioning your video files.
00:33
YouTube. When the video file is captioned, in the lower right-hand corner you see all of your various icons, like making it larger, increasing the resolution, turning annotations on and off. Six buttons from the right is the closed caption symbol. When you click on that, you get this closed caption menu, and you can see there are automatic captions (that was the one that was captioned automatically), and then, when I uploaded my own SRT file, my own transcript file, it gives me the English caption. So I selected that one, and that's what you see onscreen.
00:32
Now what's interesting, and what I want to show you in this next slide, is the translation feature. So I selected my English track, and then under Translate Captions, I selected that, and it gave me the next drop-down, and I was able to select a language. I picked Arabic because I just wanted to see what it looked like, and it automatically, on the fly, translates all of my captions to Arabic. Again, Google Translate isn't perfect, but in terms of mass distribution in multiple languages, you just can't beat that.
Section 8: Extras
00:42
Audacity is an open-source Windows, Mac, and Linux-based audio editing application. If I'm going to outsource my transcript, I don't want to send the whole video. I take the video, load it into Audacity, and save it out as a very highly compressed MP3 audio file. That can take a video file that's five or six hundred megs and bring it down to two or three megs, so when I upload it to the transcription service, I don't have to wait for 500 megs to upload. In the next video you're going to see how to use Audacity to extract audio out of video and save it to an MP3 file.
01:44

In this video I'm going to show you how to use the open source audio editing program Audacity to strip audio from a video file. Now, you may ask yourself why you'd want to do this. Audio files are much smaller than video files, so when you are using an outside service to do your transcription, it's often easier to just upload an MP3 file to them versus a full video.

So I use the Audacity program to strip the audio out. Under File, Open, I'm just going to go find my video file. You can see my video file here; it's the audio export. Once it's done processing, it will have grabbed the audio out of the file, and you can see here, I'm going to play the file. [audio skips] So this is the transport: play and stop. That's my audio file, and it is correct, and now I want to export it as an MP3.

So: File, Export. I'm going to go under the settings and make sure I select MP3. You have a lot of other options. I use MP3 because it's very compressed, and I don't really mess with the quality settings; it just doesn't matter for the transcription, as long as you can hear it. It's going to ask me to add metadata. Again, I don't do this, because it's for a transcription service. Then it's going to export my file.

This is fairly quick; it's only a three, almost four, minute long file, so it's going to take 10 to 15 seconds to export. Once the export is complete, I can then upload the file to the transcription service.

So that's how to use Audacity to export an audio file out of a video file.
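If you prefer the command line, ffmpeg can do the same strip in one step: `-vn` drops the video stream and `-b:a` sets a low audio bitrate. Here's a sketch that only builds the argument list rather than running it; the file names and the 64 kbps rate are my assumptions, since quality barely matters for transcription:

```python
def extract_mp3_cmd(video_in, mp3_out, bitrate="64k"):
    """Build an ffmpeg argument list that drops the video stream (-vn)
    and writes a small MP3 suitable for a transcription service."""
    return [
        "ffmpeg",
        "-i", video_in,   # source video
        "-vn",            # no video in the output
        "-b:a", bitrate,  # low audio bitrate; fine for transcription
        mp3_out,
    ]

print(" ".join(extract_mp3_cmd("lesson.mp4", "lesson.mp3")))
```

Either route, Audacity or ffmpeg, gets you the same small MP3 to upload.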

03:05

In this video I'm going to show you how to convert video files into MP3 files using Audacity's batch processing feature. There are going to be a lot of times when you want to upload a number of transcripts to a transcription service, but you don't want to upload the video, just the audio, and so this is how I do it.

This process relies on Audacity's batch feature, which is called Chains in Audacity. The first thing we have to do is create a batch export to MP3. To do that, you go to File, Edit Chains, and in here you'll see there's an MP3 conversion already. But if you look at that conversion process, it normalizes, which basically auto-sets levels on the audio before it exports to MP3.

For my videos, I don't like to do that. You may want to experiment. This may be really great for your videos, but for the way that I record audio I don't want my audio normalized when I export it. So, what I'm going to do here is I'm going to create a new batch process and we're going to call this "Clean MP3 Export."

I'm going to click Okay and I'm now looking at the commands and there just isn't anything in here. So what I want to do is I want to insert the Export MP3 command. So I'm going to select Export MP3 and click Okay. Whoops. Insert, there we go. I had to get it up here. So, double-click on the Export MP3 and then click Okay.

So, now all I want to do is I want to export it and then that's the end of the command. So I'm going to click Okay. Now, the next thing I want to do is I want to load up my video files and process them automatically to MP3.

So, under File, I'm going to use the Apply Chain command. You can see here, if I already had a project open I could apply it to the open project. Or in this particular case, I want to select Files. So I want to apply the Clean MP3, so I'm selecting that and then I'm going to apply the files. Then we'll come out here to my exports.

Whoops. I'm going to go to the right export file and this is my set of files. So I'm just going to grab that one and I'm going to go all the way to the end and I'm going to click Okay. What you're going to see here is it's actually loading each video one at a time and processing it, and then loading the next one and processing it.

So, I won't make you wait through this whole thing. I'll just show you what I did in a previous processing job. So I'm going to open up My Finder, go to My Files and, when I previously exported this, it created a folder called "Cleaned". In that folder it created all my MP3 files, one after the other, ready to be uploaded to the transcript service.
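Outside Audacity, the same chain idea — one MP3 per video, dropped into a "Cleaned" folder — can be scripted as a loop that emits one ffmpeg command per file. A sketch under stated assumptions (the extension set, folder name, and bitrate are mine, not the course's):

```python
import tempfile
from pathlib import Path

VIDEO_EXTS = {".mp4", ".mov", ".mkv"}  # assumed set of source formats

def batch_mp3_cmds(src_dir, out_dir="Cleaned"):
    """One ffmpeg strip-to-MP3 command per video in src_dir, writing
    into a 'Cleaned' subfolder like Audacity's chain does."""
    src = Path(src_dir)
    cmds = []
    for video in sorted(src.iterdir()):
        if video.suffix.lower() in VIDEO_EXTS:
            mp3 = src / out_dir / video.with_suffix(".mp3").name
            cmds.append(["ffmpeg", "-i", str(video), "-vn",
                         "-b:a", "64k", str(mp3)])
    return cmds

# Demonstration with throwaway files (names are made up).
with tempfile.TemporaryDirectory() as d:
    for name in ("lesson1.mp4", "lesson2.mov", "notes.txt"):
        (Path(d) / name).touch()
    cmds = batch_mp3_cmds(d)
    print(len(cmds), "conversion commands")  # the .txt file is skipped
```

Like the Audacity chain, you queue everything up once and let the whole folder convert unattended.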

00:27
The next set of extra videos covers ScreenFlow. ScreenFlow is a Mac-only screen capture utility, and a very, very common way of creating screencasts. It has the ability to transcribe, sync, and open-caption your video files and save them out, all in one application. So for anyone who's using ScreenFlow, the following videos will show you how to do that.
03:55

In this video I'm going to show you how to create your transcription and do your timing in a Mac program called ScreenFlow, which is a pretty commonly used screen capture utility.

So what I've got here is my file; I've loaded up my video and brought it onto the timeline, and that's what you see here. In ScreenFlow, the audio and the video are merged into one. So that I can see it better, I'm going to detach the audio, and I'm just going to do a little quick flip between these so that I can see the audio.

And so I've got audio on top and video on the bottom. I'm going to zoom in a little bit, use the transport controls to zoom in. And if I hit play you'll hear the music. All right, so I've now got a way of seeing that. I am going to increase the audio size just so I can see that a little bit better when I do my transcript.

So let's start transcribing and syncing this document. The way that ScreenFlow does this is, under View, you open up another component of the timeline, and that is the caption track. So I'm going to click on View, Show Caption Track, and what you'll see here is a new track. The caption blocks are three seconds long by default, and that works pretty well.

So this is my first track here, and I've got two different transport controls: this one controls the timeline at the bottom, and this one controls the caption track. So I'm just going to hit play, and hit stop here, and I know that this one's music, so I'm going to type in music.

Again, since this is an action, not a narrator, I'm going to use the convention of bracket, all caps, close bracket, and that gives me my [MUSIC]. This second one is also music; I can leave it blank, or type in music again so that it carries across that track, or just make it one whole track. Since I've done it twice here, when I go to export this could be a little funky, so instead I'm going to delete that and make this track six seconds long. So now it's six seconds long. Now when I come here and hit play-

[Video]: Let's set up-

Let's, set up. Next one.

[Video]: The Nexus 10. The Nexus 10.

The, Nexus, 10. Next one.

[Video]: The first thing I am going to do is over here-

The, first, thing I am going to do.

And you would continue on until your file is done. Notice that the captions are in three-second increments, and the timing here is a little loose: I can't drag and drop the caption blocks on the timeline, I have to change everything numerically, so it's a bit awkward in that respect. A caption will show up on screen here at 12 seconds even though the speech doesn't start until 13 seconds, so it's not timed as accurately as other processes, but it's a really great all-in-one solution.
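The bracketed, all-caps style used above is a common captioning convention for non-speech audio, while spoken dialogue is left as plain text. As an illustration (the exact cue text here is mine, assembled from this lecture's examples, not taken verbatim from ScreenFlow's output), a finished caption track would read something like:

```
[MUSIC]
Let's set up the Nexus 10.
The first thing I'm going to do is over here...
```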

01:50

In this video I'm going to show you how to export an actual video file with a soft caption track embedded in it. So, what we have here is my file; I've captioned it inside of ScreenFlow and now I'm ready to export.

To export, I just hit File, Export. Pretty straightforward. I need to pick where I want the file to go; I'll name this "ScreenFlow Export with Caption", and I'm going to choose web, low resolution. For the sake of this export I'm going to reduce the size down to 50 percent.

And then notice here that I have Add Caption Tracks. I want to select that, and what it'll do is add this caption track into the video itself. It soft-captions it, meaning it embeds the caption track into the video as text rather than burning it into the image; your player will then need the ability to display that embedded text track.

So you can see here, and I'll pause during this, but you'll see here, that it's now exporting.

All right, now that the video has exported, let's go take a look at it. So there's my ScreenFlow capture. I'm going to double-click the video. And you don't see the captions at first; it's only when I go into View, Subtitles; see, the captions are turned off by default. I'm going to turn on the English captions, and [music], there it is. Let's set up...

[Video]: Let's set up the Nexus 10. The first thing I'm going to do is...

And there's my captions.

01:00

In this video I'm going to show you how to export an SRT file out of ScreenFlow once you have transcribed and synced the file.

So I've done that already; here's my music, "Let's set up the Nexus 10," and I'm now ready to export my SRT file. Under Edit, you're going to find the Caption menu, and I've set my current language already. The default is English, and this video happens to be in English, but if you were translating or captioning in a different language, you'd want to set the default to that language. The next step is to export the file: I hit the Export to SRT button, go to the directory that I want, and call the file "my English transcript caption file". It asks me the language again, which is what it was already set to, and I hit Save. ScreenFlow then exports the SRT so that I can use it in a different application.
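For context on what that exported file contains: SRT is a plain-text format made of numbered cues, each with a start/end timestamp line followed by the caption text. As a minimal sketch (this is not ScreenFlow's actual exporter, and the cue timings are illustrative, based on this lecture's captions), generating an SRT file like the one exported above could look like this:

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def build_srt(cues):
    """Build an SRT document from (start, end, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

# Illustrative cues mirroring this lecture: one six-second music cue
# (instead of two three-second ones) followed by the spoken captions.
cues = [
    (0.0, 6.0, "[MUSIC]"),
    (6.0, 9.0, "Let's set up"),
    (9.0, 12.0, "the Nexus 10."),
]
print(build_srt(cues))
```

Notice the six-second `[MUSIC]` cue: merging the two default three-second blocks into one, as done in the earlier lecture, avoids the caption repeating in the exported file.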

Section 9: Resources & Closing
Article
This is a list of software and links.
Article
Here are tips, tricks, and troubleshooting.
00:21
So, I can't thank you enough for joining me in this class. If you have any questions or comments, please don't hesitate to use the tools within Udemy, and I will answer them as soon as I possibly can. Thank you very much for joining me in this class.
00:20
If you have any comments or want to reach me directly, please do not hesitate. All of my contact information is onscreen and in the show notes. Either use the messaging system in Udemy or any of the channels below. I would love to hear from you about how you're using captioning, or answer any questions you might have.


Website: www.samirahman.com

Twitter: twitter.com/sami_rahman

Google+: plus.google.com/u/0/100831031341036113725/

LinkedIn: linkedin.com/in/samirahman1

Pinterest: pinterest.com/samirahman

YouTube: youtube.com/user/SNApps4Kids

YouTube: youtube.com/ipads4specialneedsbo

YouTube: youtube.com/android4specialneeds

Article
Please consider rating this course.  I have gotten some really great feedback and would love to know how you feel.


Instructor Biography

Sami Rahman, Technology Geek & Disability Advocate

Sami Rahman is the CEO of SmartEdTech, which develops software that helps children with disabilities learn and grow. Mr. Rahman holds certifications in an Assistive Technology Applications Program offered by California State University and in Mobile Devices for Children with Disabilities from TCEA. He is the author of Getting Started: iPads for Special Needs; the book is available in print, with the full version online for free.
