
All right. Hi everyone. Okay, I guess we're live.

Ah, so as - as - Aarti was saying,

please enter your SUNetID.

Ah, we can bring this up again at the end of class today.

We'll just take another like, what 20 seconds?

And then we'll - we'll go on to the main discussion.

[NOISE] All right.

So, um, what I want to discuss with you today is,

um, ah, what I'm going to call full cycle deep learning applications, right?

Um, [NOISE] and so,

um, I think this Sunday, uh,

you'll be submitting your proposals for the,

ah, class projects you do this quarter.

And, um, in most of the, uh,

in - in a lot of what you learn about the machine learning projects,

you learn how to build machine learning models.

Um, what I want to do today is share with you

the bigger context of how a machine learning model,

you know, how a neural network you might train, uh,

fits in the context of a bigger project.

Uh, so what are all the steps, right?

Just as if you're writing a software product,

you know, you take other classes,

and they teach you how to build a website, for example.

Um, but

to build a product requires more than just building a website, right?

So, what are the - what are the other things you need to do to

actually do a successful software project?

And in this case, to do a successful machine learning application.

Um, and so, uh,

let's see. So - so, yeah.

Test, test, is the audio on?

Test. Could you turn up the audio?

Yeah, how is this?

No? Can't hear me now.

Hey.

Oh, I think I'm broadcasting.

I hear myself great.

[LAUGHTER] Okay, you can hear me now.

Great. Thank you. All right. Thank you. All right.

So, what I want to do is share with you,

um, full cycle machine learning.

Not just how to, uh,

you learn a lot about how to build deep learning models

but how does that fit in a bigger project, right?

Just as if you're taking the class on building a website,

you know, then great, you know how to code a website, that's really valuable.

But what are all the things you need to do to make a successful website?

Or to build a - build a project that

involves launching a website or mobile app or whatever.

Um, so as - as you plan for your,

um, class project proposals, uh,

due this Sunday, uh,

if you're doing an application project that fits in the context of a bigger application,

um, also keep all these steps in mind, right?

So, um, you know,

these are what I think of as the steps of an ML project,

or really maybe - maybe not a fast project but

maybe a serious machine learning application, right?

And, you know, I've built a lot of machine learning products over several years.

So, some of these are also things that I wish I had known,

you know, many years ago.

Um, one, this is maybe kind of obvious but,

you know, select a problem.

And let's say for the sake of simplicity that,

you use supervised learning, right.

It - it turns out for the CS 230 class projects I think,

uh, more than 50 percent of the class projects tend to use supervised learning.

There are also other projects that use - end up

using GANs which we'll talk about later this quarter or other things.

But I think, you know,

let's say you use supervised learning to - to build an interesting application.

Um, and - and I think for today,

I'm going to use as a running example of building a,

um, building a - a voice-activated device, right?

So, you know, uh, no, actually,

how - how many of you have like a smart speaker in your home?

Like a voice-activated device in your home?

You know, the - in - in the US?

Well, not that many of you, interesting. Okay cool.

Yeah, so, I think, uh, you know the - the Amazon Echos, Google Homes,

the Apple Siris, or - in - in China,

one of my former teams built Baidu DuerOS.

Ah, ah but let's say for the sake of

argument that you want to build a voice-activated device.

And I'm going to use as a running example.

Um, and so in order to build a voice-activated device,

and - and again I'm not going to use any of

the commercial brands like Alexa or OK Google or Hey Siri

or I guess in China it was a Hello [FOREIGN] which means kind of roughly hello little du.

Um, but let's use a more neutral word, which is, let's say you

want to build a device that responds to the word activate.

Um, and you're actually going to implement this as a problem set later this quarter.

Um, but so you want to build a yeah.

Is it possible to [inaudible].

Okay. No. Volume up.

Uh, uh, let's see how that - okay is this better?

No? Yes? No? This is better?

Okay, cool. Thank you. Look at how ironic,

talk about speech recognition and the volume isn't high enough.

Okay. Um, so let's say you want - well,

[LAUGHTER] let me know if it cuts off again, and thank you.

Okay. Um, so let's say you want to build a voice-activated device.

So, the key components,

the key machine learning,

deep learning component is going to be a learning algorithm

that takes as input an audio clip and,

uh, outputs, um, did it detect what's sometimes called the trigger word.

Did I go soft again? Okay, this is okay, great.

All right, and - and the output y,

you know, zero or one,

did it detect the trigger word, such as Alexa,

or OK Google or Hey Siri,

or Hello [inaudible] or, um,

or activate or whatever wake word or trigger word, right?

Um, and so step one is,

uh, select a problem.

Um, and then, in order to train a learning algorithm,

you need to get labeled data if you're applying supervised learning.

And then you design a model,

use backprop or some of the other algorithms you learned about - momentum,

Adam, you know, these optimization algorithms, gradient descent -

to train the model.

Um, and then maybe you test it on your test set.

And then you deploy it meaning you start selling these smart speakers and,

you know, putting them, hopefully, into your users' homes.

Um, and then you have to maintain the system.

I'll talk about this later as well.

Uh, and - and this is not chronological, but, ah,

one thing that's often done -

and I - I want to talk about it at the end instead, it's not really step eight -

is QA, which is, uh,

quality assurance, which is an ongoing process, right?

And so, um, one, uh, let's see.

So, as you - so if you want to build a product,

if you want to sell a machine learning product,

these are maybe some of the key steps you need to work on.
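To make the "design a model, then train it with an optimizer like Adam" part of those steps concrete, here is a minimal sketch of a small trigger-word classifier in Keras; the spectrogram input shape, layer sizes, and dummy data are illustrative assumptions, not values from the lecture or the problem set.

# Minimal trigger-word classifier sketch: spectrogram in, P(trigger word) out.
# The input shape (500 time steps x 101 frequencies) and layer sizes are
# assumptions for illustration only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

TIME_STEPS, N_FREQ = 500, 101

model = keras.Sequential([
    layers.Input(shape=(TIME_STEPS, N_FREQ)),
    layers.Conv1D(64, kernel_size=15, strides=4, activation="relu"),
    layers.GRU(64),                         # summarize the clip over time
    layers.Dense(1, activation="sigmoid"),  # y = 1 if the trigger word was said
])

model.compile(optimizer=keras.optimizers.Adam(1e-3),  # Adam, as mentioned above
              loss="binary_crossentropy", metrics=["accuracy"])

# X: (num_clips, TIME_STEPS, N_FREQ) spectrograms, y: 0/1 labels (dummy data here).
X = np.random.rand(32, TIME_STEPS, N_FREQ).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(X, y, epochs=2, batch_size=8)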

Um, some observations, when you train a model,

training a model is often a very iterative process.

So, every time we train the machine learning model,

you'll find that, you know,

I can almost guarantee whatever you do,

it will not work.

At least not the first time, right?

And so you'll find that even though I've written this, uh, sequence of steps,

when you train a model, you'll go,

nope - that neural network architecture didn't work,

and you need to increase the number of hidden units or change the regularization

or switch to an RNN or switch to a totally different architecture.

And sometimes you train a model and go nope, that didn't work.

Um, I need to get more data, right?

And so this is often a very iterative process where you're cycling through,

um, uh, several of these different steps.

Um, and then I think, ah,

one distinction that you have not yet learned about in the Coursera - in

the deeplearning.ai Coursera videos is how to

split up the data into train, dev, and test sets.

So, I am going to simplify those details for now.

But just as a - a foreshadowing, I guess, of what,

um, you learn later in the - in the, uh,

deeplearning.ai Coursera videos is how to take a data set

and split it - excuse me -

into a training set, um,

ah, into a set that you use to cross-validate

during development, called the dev set or development set

or the cross-validation set,

and then a separate test set.

So, you'll learn about this later.

But I'm just simplifying a little bit, um, for today.
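For a concrete picture of that split, here is a minimal sketch; the 80/10/10 ratio and the hypothetical list of clip filenames are assumptions for illustration, not a split the course prescribes.

import random

clips = [f"clip_{i}.wav" for i in range(1000)]   # hypothetical labeled audio clips
random.seed(0)
random.shuffle(clips)

n = len(clips)
train = clips[: int(0.8 * n)]             # used to fit the model
dev = clips[int(0.8 * n): int(0.9 * n)]   # used to compare models and hyperparameters
test = clips[int(0.9 * n):]               # held out for the final evaluation
print(len(train), len(dev), len(test))    # 800 100 100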

Okay. So, um, so I think,

um, the first thing I want to do is ask you a question, right?

So, we're going to talk through many of these steps.

And it turns out that, um,

what a lot of machine learning classes do and - and do a good job teaching

is focusing on maybe these three steps,

or maybe these four steps, right?

And what I want to do today is spend more time,

so this is the heart of machine learning,

how do you build a great model.

Uh, and what I want to do today is spend more time talking about step one,

uh, and six and seven,

and then just a little bit of time talking about

the core of this because you kind of need to do

the other steps as well when you want to build

a deep learning product or build a machine learning application.

Okay. Um, so let's talk about discussion question.

Um, I'm actually curious.

Uh, if you're selecting a, um,

project to work on, uh,

uh, what are the - actually so I - don't - don't answer this yet.

I'll - I'll tell you what the question I'm going to ask is,

um, which is - [NOISE]

All right. Uh, what properties make for a good candidate deep learning project?

But don't answer yet, right?

So I, I wanna say a few more things before,

before I invite you to answer,

which is that, um,

all of you, for the last few days,

I hope, have been thinking about what project you wanna do for this class.

And what I wanna do is just discuss some properties of what are

good projects to work on and what are maybe not good projects to work on, okay?

And, and think of this as your chance to give your classmates advice, right?

What are the things your classmates should think about if they can't

decide this is a good project to work on, okay?

Um, and so, what I wanna do for today is, ah,

use, um, this voice-activated thing as a mo - as a,

as a motivating example.

And, you know, th - there's actually one project I, uh,

uh, thought of working on

but decided not to work on,

ah, and that, that, that's a voice-activated device.

So, it turns out that [NOISE] , um,

these voice-activated devices like Echo, Google Homes, and so on,

they are taking off quite rapidly in the US and around the world.

Um, it turns out that one of the, you know,

significant pain points of these devices is the need to,

um, configure it, right?

To set it up for Wi-Fi.

So, um, I've done a lot of work on speech recognition, you know, ah,

at Google, did a lot of work on the Google speech system,

I led the Baidu speech system.

So I've published papers on speech recognition,

and I have a, I have one of these devices in my home, right?

Um, well, a - actually, Amazon, I have an Amazon Echo in my living room.

Um, but even to this day,

I have configured exactly one light bulb to be hooked up,

to be controlled by my Echo,

[LAUGHTER] because the, the setup process,

not blaming any company,

it's just difficult to hook up, you know,

a Wi-Fi enabled light bulb and then to

set it up so that you can say to your s - smart speaker or whatever,

you know, "Smart device, turn off the lamp."

So, I, I have one light bulb in my living room where,

that I can [LAUGHTER] turn on and off and that's it, right?

[LAUGHTER] Ah, even as a speech researcher.

So, [LAUGHTER] um, maybe that's even a bad example.

Um, so one, one,

one application that I think, ah,

that I was actually seriously considering working on is to build a, um,

embedded device that you can sell to lamp makers,

so that, I don't know where you buy your lamps from, but, you know,

I have a few lamps from IKEA,

or a few lamps from wherever.

But you can buy a desk lamp,

[NOISE] so that when you buy the desk lamp,

there's already a built-in microphone,

so that without needing to connect this thing to Wi-Fi, you know, and say,

"Hey here's a $20 desk lamp, um,

put it on your desk," and you can go home and say,

"Desk lamp, turn on,

or desk lamp, turn off."

Uh, then, I think that will help a lot more

users get voice-activated devices into their home.

And it's actually not clear to me,

if you want to turn on a desk lamp,

it's actually not clear to me that you want to turn to a smart speaker and say,

"Hey, Smart Speaker, please turn on that lamp over there," right?

I - it, it, maybe it feels more natural to just talk

directly to a desk lamp and tell it to turn on and turn off.

Um, and so, ah, a - also for what it's worth,

some friends and I actually evaluated this,

and we actually thought that this could be a r - reasonable business,

to build embedded devices,

to sell to lamp makers or other device makers so that they can sell

their own voice-activated devices without needing this complicated Wi-Fi setup process.

Um, and so to do this,

you would need to build a learning algorithm,

and have it run on an embedded device,

that just inputs an audio clip and outputs,

you know, whether it detects the, the, the wake word.

And instead of the wake word being "Activate," the wake word would be "Lamp,

turn on" or "Lamp, turn off."

You need two wake words or trigger words,

one to turn it on, one to turn it off, right?

Oh, and, and, and, and I think just the other thing that,

um, I think would make this work,

ah, is, um, ah, ah, to, ah,

to give these devices names.

So, if you have five lamps, or two lamps, you,

you need a way to index into these different desk lamps.

So, um, let's say you decide for your project,

you know, to have a little switch here,

so this lamp could be called John,

[NOISE] or Mary, or Bob,

or Alice, like a four-way switch.

So that, depending on where you set this four-way switch, you can say,

you know, "John," right?

"Turn on," right?

O - or d - if, if you decide to call this lamp John,

I guess you'd give the others some other names so you don't have

any two lamps with the same name, okay?

Um, so, what I'm gonna do is use this as a, ah,

motivating example, as a possible project.

Oh, and, and I'm not working on this.

If any of you want to build a startup doing this,

go for it, I, I - this is not, [NOISE] well,

I, I felt my teams and I, we had better ideas,

so we wanted to do other things instead,

but I actually don't see anything wrong with this.

I think this actually could be a reasonable thing

to pursue as well, but I'm not doing it.

So you're all very welcome to, if you want, okay?

[NOISE] Um, so now,

the question I want to pose to you [NOISE] is, ah,

when you're brainstorming project ideas, you know, like, this idea,

or some other idea, um,

what are the things you would want to watch out for?

Wha - what are the properties that you would want to be true

in order for you to feel good proposing this as a,

as a CS 230 Project, right?

So why don't you take a minute and, and write this down.

I think, ah, uh, uh, yeah.

Well, wha - wha - what if a friend is asking you,

"What are the things I should look at to see if something is [NOISE] a good project?"

What would you, what would you recommend to them?

So, fe - few, just write down a few key words,

and then we'll see what people say.

And then, and then, I'll tell you what I tend to

look out for when I'm selecting projects.

And I have a list of, ah, five points.

Might take like, I don't know,

like, two minutes to -

That's not activated.

Yeah.

Oh. Sorry. This is not activated?

Um, uh, you're not able to answer this up, enter answers?

Okay. [NOISE] Let me test the Internet access.

Just, [NOISE] just checking it.

Yeah. I'm connected to the Internet.

Uh, [NOISE] Aarti, any ideas?

[NOISE]

It was just activated.

Oh, I see. Okay. All right.

Let me try that. [NOISE] I'll just.

Turn it on. [OVERLAPPING]

Oh, it's working now? Okay. Thank you. [OVERLAPPING]

Maybe I turned it off while doing that,

but it keeps getting turned off.

Okay. Yes. Thank you.

[NOISE]

Yeah.

Thanks.

[NOISE] So, if you take,

like, two minutes to enter, and I think,

I think I can figure this and let you enter multiple answers.

Let me just take two minutes.

[NOISE]

All right, another one minute.

[NOISE]

All right, 30 seconds.

Okay, three, two, one.

Well, maybe in hindsight that wasn't the best visualization.

Can people see this?

[NOISE]

Um nevermind, trying to see if - all right so,

data novelty, lots of data,

some of these are very small, human doable,

number of examples, uh,

[inaudible] algorithm,

new industry or field, uh,

clear objective, practical, useful, huh,

oh, finishing time, uh, [LAUGHTER].

Solves a real-life problem,

useful, hasn't been done, [inaudible] tractable.

Yeah. Generalization [inaudible] Um,

let me make some comments on these.

I think I, this is,

this is pretty good, um.

I have a list of five bullet points and maybe I'll just share

with you my list of five, uh, um,

[NOISE] which is,

just some things to encourage you to pay attention to, um, you know.

This, this may or may not be the best criteria,

but interest, I think, well,

interest, plus, uh, I just hope you

work on something that you're actually interested in, um.

And then I think, uh, uh, right,

data availability, which many of you cited, is a good criterion. Or,

uh, one of the ways that,

um, Stanford class projects sometimes do not go well,

is students spend a month trying to collect data and after a month

have not yet found the data and then - and then,

you know, and then there's, uh,

and then there's a lot of wasted time.

Um, one thing that I would encourage

you to consider as well is,

uh, domain knowledge, um.

And I think that if you are a biologist and have

unique knowledge of some aspect of

biology to which you want to apply Machine learning,

that will actually let you do a very interesting project, right.

That - that is actually difficult for others to do.

Um, and I think, uh,

more generally, as, as advice for navigating your careers, right.

So, you know, i - it's interesting because, you know,

Machine learning, Deep learning, there's so much,

there's so many people wanting to jump into Machine learning and Deep learning,

um, actually I'll give you an example.

So, I sometimes talk to, uh, uh, doctors,

Radiology students, uh, uh, including, you know,

like Stanford and other universities

Radiology students that want to learn about Machine learning, right?

Because they hear about, you know, Deep learning,

maybe someday affecting radiologist jobs,

and so they want to be part of Deep learning.

And so my career advice to them is usually

not to forget everything they learned as a doctor and try to,

you know, do Machine learning 101 from scratch - not to just

forget everything they learned as a doctor and become a CS major.

I think that - that path can work,

but I think where radiologists could do the most unique work,

uh, that allows them to make the most unique contribution,

is if they use their domain knowledge of

healthcare and radiology and do something in Machine learning applied to radiology,

right, uh, and so,

was [LAUGHTER] all right.

[LAUGHTER] How, how many - how many millennials are there in this class?

[LAUGHTER] What is that me,

me, me, me, me thing? [LAUGHTER] All right.

All right. This is really wrong.

Yeah. [LAUGHTER] I - I think it's because this is a word cloud.

So they count word frequency,

right? The money thing.

I don't know, I have very mixed feelings about that [LAUGHTER]. All right.

Um, but I think as,

[inaudible] I actually know that some of you are taking, you know,

Deep learning because you work in a different discipline and

you want to do something in this hot, new,

exciting thing of Machine learning and I think,

uh, whatever major you're in,

if you have domain knowledge about some other area,

you know, Education, Civil Engineering, Biology, Law.

Um, taking Deep learning allows you to do

very unique work applying Machine learning to your domain,

right, um, [NOISE] uh. Let's see.

Um, I think that, uh,

uh, um, [NOISE] um,

I think, well, I call it utility, but several of you mentioned it as well,

something that has a positive impact, that actually helps other people,

uh, uh, uh, and,

and I - I - I don't know, money could be an aspect of utility,

but maybe not the most inspiring one, uh,

and then I think, um,

[NOISE] and I think one of the biggest challenges we face in the industry today,

frankly, is actually good judgment on feasibility, um.

So today, I still see too many, uh, leaders,

sometimes CEOs of large companies that stand

on stage and announce to the whole world, you know,

we're gonna do this Machine learning project,

to do this by this deadline and then 20 minutes later,

I talk to their engineers and the engineers say nope, no way,

not happening, what the [LAUGHTER] CEO just publicly announced [LAUGHTER] -

the whole engineering team [inaudible] is not doing that and knows that it's impossible.

So, I think one of the biggest challenges is actually feasibility um,

in fact, you know,

I was chatting with Aarti about, uh, the, the,

um, uh, TA office hours and I know that, uh,

uh, there have been a lot of,

you know, a lot of you have been, uh,

thinking about applying, uh,

end-to-end Deep learning, right?

You know. Can you input any x and output any y and

do that accurately and sometimes it's possible and sometimes it's not,

and it still takes, uh, relatively deep,

deep judgment about what neural networks can and cannot do with a certain amount of

data that you may or may not be able to

acquire in order to do some of these things, right?

Um, so - so, I think throughout this quarter,

you'll gain much deeper judgment as well on - on what is feasible, I guess.

[NOISE] This is pretty interesting.

I once knew, uh, uh, a CEO of a ve - very,

of a large company that once told his team,

um, uh, he actually gave his team these instructions.

He said, uh, I want you to assume that AI can do anything,

uh, and - and - and uh, uh,

I think tha - tha - that had an interesting effect, I guess.

Uh, uh, uh, yeah.

Cool. All right.

So, I think step one, um,

was select a project,

I hope as you select your [inaudible] projects,

you try to keep some of those things in mind, um.

Step two is get data, right, [NOISE] um.

And so, uh, what I want you to do,

uh, and I'm going to pose the second question

and then have some of you discuss this.

Let's say that you're actually working on this, you know,

smart voice-activated embedded devices thing, right?

So let's say that you and your friends wanna build a startup to

train a deep learning algorithm to detect, you know,

phrases like "John, turn on" or "Mary, turn off" or "Bob, turn off" or whatever, uh,

to sell to device makers so that they

can have a low-cost [NOISE] embedded voice detection chip

that doesn't require a complicated Wi-Fi setup process, right?

So let's see one of - let's see how you'd wanna do this.

So, you need to collect some data in order

to start training a learning algorithm, all right?

So, uh, the second question I would pose to you is, uh, uh,

uh, two - a question in two parts, uh,

but, but, uh, I'll have you answer it all,

all at the same time, which is,

uh, in how many, how many days -

let's say you actually propose this for

your CS 230 project this Sunday and then you start work on it,

you know, like on Monday.

Or maybe you start work on it today, before the proposal.

But, uh, how many days would you spend collecting data?

Uh, and how would you collect data?

Okay? And I think, um, actually, how,

how many of you have participated in Engineering Scrum,

if you know what that means?

Okay, a few of you, uh, those who have been in industry.

Okay, all right, so engineering estimation -

when you estimate how long a project takes, one of the common practices is to

use a Fibonacci sequence to estimate how long a project will take, right?

And so the Fibonacci sequence: one, one, two,

three, five, eight, 13, and so on.

And this grows roughly like powers of two - but doesn't grow as fast as

powers of two. Fibonacci numbers are cool, right.

For so - uh, uh, so,

so what I want you to do is - just finishing the configuration here, right.

Where are, um, the speech bubbles? Okay.

Yeah, that's good. All right.

So what I'd like you to do is, in the text answer,

uh, really write two things,

one is write a number -

how many days do you think you'd spend on collecting data?

You and your teammates, if you're actually doing this project.

Uh, and then how,

how would you go about collecting the data?

Okay? So why don't you take, good, another two minutes,

uh, to write in an answer.

[NOISE] Oh I'm sorry.

Hmm, this won't load - no, still not activated.

So let me try to hit, uh - interesting.

That is not helpful.

All right, that is definitely not helpful.

Um. All right, let's do this,

write down your answer on a piece of paper first and take two minutes [inaudible 29:10].

So the two questions are,

are how many days?

Uh, pick a number from the Fibonacci sequence and,

uh, how are you going to collect the data?

Oh, okay, uh, let's swap out my computer for Aarti's.

Oh actually, yeah oh,

if Aarti's computers working, actually go ahead.

Sorry. Okay I can just present.

Yeah, yeah let's, let's plug in your laptop? Shall we?

So you just use your laptop. [NOISE] See, sure.

Yeah. Yeah. It doesn't say - I wonder if there's a network problem or a web browser problem.

Uh, I started using uh,

Firefox recently in addition to Chrome and Safari, and that was Firefox,

I'll try with other web browsers later.

Right, cool. Okay, okay, thank you.

Thanks all to you. [NOISE].

All right. I can maybe,

yeah, maybe people can take another minute from now,

just extend the time a bit.

All right. Now the 10 seconds.

All right, cool. Let's see,

uh, show people's answers, okay?

Right. Well, three hundred sixty-five

So, there's a, there's a,

there's a lot of variance in the answers, right?

Uh, [LAUGHTER] download from online - depends on what data you want.

It turns out, well,

if you're trying to find data for phrases like "John, turn on," that,

that, that data doesn't exist online.

Uh, it turns out that if you're trying to find audio clips of the word activate,

there are some websites with, uh,

single words pronounced, but - but, uh,

not a lot of audio clips such that

the trigger word or the wake word is the word activate.

Uh, there are some websites we can download like,

maybe 10 audio clips of a few people saying activate but it's quite

hard to find hundreds of examples of different people saying the word activate.

Um.

Five days it falls from the sky.

[LAUGHTER] All right, so,

let me suggest, um,

uh - let me suggest that you guys discuss with each other in small groups,

uh, what you think would be the best strategy?

How many days you'd spend collecting the data and how you'd collect the data?

Try to convince the people next to you of that.

Uh, and, and before I ask you to start discussing,

I wanna leave you with one thought which is, um,

how long do you think it will take you to train your first model?

Right. And so if it takes you a day to train your first model, or two days -

uh, do you want to spend x time collecting data and then spend, let's say,

you know, [inaudible] the deep learning thing,

train a model, it might take a couple of days, right?

Especially if you download open-source packages. So,

so if the amount of time needed to collect data is

x, followed by two days to train your first model,

what do you think x, the amount of time, should be?

Why don't you go spend like two minutes to discuss with each

other and see if you can - can - the, the,

the answers are - there is

very large variance, right? Once you guys discuss - if you actually, if,

if the people sitting next to you are your project partners,

why don't you discuss with them how,

how many days you think you should spend collecting data and how you'd collect the data?

Okay? Why don't you take two minutes to discuss a little.

[NOISE]

All right.

[NOISE] All right guys.

So, [NOISE] wow, all right guys.

[LAUGHTER] Hey guys.

So, [NOISE] all right.

A lot of exciting discussion.

So, actually how, how many of you,

how many of the groups wound up on the, on the low end?

How many of you, you know,

convinced each other that maybe it should be,

like, three days or less?

Oh, just a few of you. How come?

Some, some - someone, someone say why?

Why is it, why, why?

Because you wanted us to see if the algorithm works first.

So, we need some direction to just test to see if the algorithm is even reaching

some sort of good benchmark before we then go and collect the next data set.

Cool, yeah, right. I guess a little bit of data to test how

the algorithm works before you even go and collect the next data set, all right?

Cool. And did anyone have a,

have a high-end, like, 13 days or more? Yes.

Very few. How come?

Anyone, actually anyone, anyone with

insights you want to share with the whole class if you,

what, what were you all discussing so excitedly?

[LAUGHTER]. Yeah, go ahead.

So, um, depending on domain knowledge, uh,

maybe data collection can take a long time, especially for this problem,

like, based on that one idea which we discussed in a previous class, we were thinking, like,

we could use, like, movie clips and, like,

subtitles to, like, generate sound, like,

[inaudible] And that will take, like, time to, like,

mine data, um, [inaudible] [NOISE],

Yeah, yeah, yeah, right.

Yeah. So, there are, uh, systems to look at subtitled,

uh, uh, videos, right?

Uh, uh, like, uh,

YouTube videos with captions or something and,

and if there's, uh, appropriately

Creative Commons data there you can use.

Yeah. So, let me,

let me tell you my bias.

I, I - I'll just tell you what I would do if I was working on this project.

Well, as - well, one caveat:

pretend I haven't done so much work in speech recognition previously, right?

Like this is my first project.

Um, I would probably spend 1-2 days collecting data [NOISE],

kind of, on the short end, right?

And I think that, you know, one of the,

and, and, and one of the reasons is that Machine Learning,

kind of that circle I drew up there is actually a very iterative process where,

um, until you try it, you,

you almost never know what's actually going to be hard about the problem, right?

And so, um, so if I was doing this project,

I'll just tell you honestly what I would do.

Like, again, I've actually thought about this project a bunch, right?

Including, you know, trying to validate market acceptance and so on.

But, um, but which is that, um,

I would get a cheap microphone, or, or

use a built-in laptop microphone or buy a microphone off, you know,

buy a microphone off Amazon or something and go around, say,

go around Stanford campus or go to your friends and have them just say, "Hey,

do you mind saying into this microphone the word activate or John,

turn on or whatever," and collect a bunch of data that way.

Um, and then, uh,

uh, and with one or two days, um,

you should be able to collect at least hundreds of examples, uh,

and that might be enough of a data set to start

training a rudimentary learning algorithm to get going.
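A rough sketch of that "walk around with a microphone" approach is below; the use of the sounddevice library, the 16 kHz sample rate, and the two-second clip length are my assumptions, not instructions from the lecture.

import sounddevice as sd
from scipy.io import wavfile

FS = 16000        # sample rate in Hz (assumed)
SECONDS = 2       # length of each recorded clip (assumed)

def record_clip(path):
    # Prompt the volunteer, record a short clip, and save it as a WAV file.
    print("Say the phrase now...")
    audio = sd.rec(int(SECONDS * FS), samplerate=FS, channels=1)
    sd.wait()                      # block until the recording finishes
    wavfile.write(path, FS, audio)

# Filenames encode the label: positive = contains the trigger phrase.
for i in range(5):
    record_clip(f"positive_{i}.wav")
for i in range(5):
    record_clip(f"negative_{i}.wav")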

Because if you have not yet worked on this problem before,

it turns out to be very difficult to know what's going to be hard about the problem.

So, is what's gonna be hard, um,

highly accented speakers, right?

Uh, or is what's gonna be hard background noise, um,

or is what's gonna be hard, you know,

confusing turn on with turn off?

You hear "John, turn" and then - [NOISE] But when you build a new Machine Learning system,

it's very difficult to know what's hard and what's easy about the problem, uh,

or - or is what's gonna be difficult far-field speech, which is,

um, the technical term for when the microphone is very far away, right?

So, it turns out that, you know,

if, if we turn on the, um,

microphone on my laptop now for example,

um, the, the laptop,

which is what, like three meters away from me, uh,

will be hearing voice directly from my mouth as well as voice bouncing off the walls.

[NOISE] So, there's a lot of reverberation in this room

and so that makes speech recognition harder.

You and I are so good at processing out reverberant sounds,

reverberations, that you almost don't notice it,

but i - it actually - the learning algorithm will have,

sometimes has, problems with reverberations, right?

Or echos bouncing off the hard walls of this room.

Um, and so depending on what your learning algorithm has trouble with,

um, you will then want to go back to collect

very different types of data or explore very different types of algorithms.

Or, the problem sometimes is just that the volume is too soft,

in which case, you know, maybe you need to do

something else and normalize all your volumes,

or buy a more sensitive microphone or something.

So, it turns out that when building most Machine Learning applications -

unless you have experience working on it.

So, I - I've actually worked on this problem before,

so I have a sense of what's hard and what's easy.

But when you work on a new project for the first time,

it's very difficult to know what's hard and what's easy.

And so my advice to most teams is, um,

rather than [NOISE] spending, say,

20 days to collect data and then two days to

train a model -

and it's often by training a model and then seeing what are the examples it gets wrong,

when does the algorithm fail?

That, that, that's your feedback to either collect

more data or redesign the model, right?

Or try something else.

Um, and if you can shrink the data collection period

down to be more comparable to how long you end up taking to train your model,

then you can start iterating much more rapidly on,

on actually improving your model, right?

Uh, and uh, so maybe one rule of

thumb I actually tend to recommend for most class projects is, you know,

if - maybe if you need to spend a week,

up to a week, to collect data, you know,

maybe that's okay, but if you can get it going even more quickly,

Uh, uh, I would even maybe more strongly recommend that, um,

and there have been so few examples in

my life where the first time I trained a learning algorithm it worked, right?

I - it like, pretty much never happens.

Yeah, I, I, I, i - it happened once about a year ago and I was so surprised,

I still remember that one time.

[LAUGHTER] Uh, and so, so,

so Machine Learning development is often

a very iterative process - so quickly collect a data set.

And, and often data sets are collected through sweat and hard work, right?

And so, I, I would literally,

you know - and actually,

well, I've actually done this a lot in speech.

And to get it going quickly,

I would probably just, um, uh,

have myself or my team members run around and find

people and ask them to speak into a microphone and record audio clips that way.

Um, and then only when you validate that you need

a bigger data set would you go to a more complicated things,

like set up an Amazon Mechanical Turk thing, right?

To crowdsource, which I've also done actually, right?

I also had very large data sets collected off Amazon Mechanical Turk,

but only in a later stage of the project where you

understand what you really need, right?

Um, so as you,

as you start working on your class projects,

maybe, maybe keep that, keep that in mind.

So, now, um, [NOISE] so,

one other tip that,

uh, Machine Learning researchers,

on average, we tend to be terrible at this,

um, um, but I'll give this advice anyway,

is when you're going through this process,

getting data, design a model, uh,

a literature search would be very helpful, you know,

so see what other, see what algorithms others are using for this problem.

It turns out the literature is actually quite immature.

There isn't a convergence on,

uh, like a world-standard set,

a sta - standard algorithm for trigger word detection in the literature right now;

people are still making up algorithms.

So, if you, if you do a little survey,

you'll find that to be the case,

but then you train your initial model.

Um, and in most Machine Learning applications,

you go through this process multiple times.

So, one tip that I would recommend you do is, uh,

keep clear notes, um, on the experiments you've run.

Right. Because what'll often happen is, as you train the model,

you see, oh, this model works great on American-accented speakers,

but not on British-accented speakers.

Right. I - I - I was born in the UK,

so sometimes I use British accents as an example.

If you are from a different part of the world,

you can think of different global accents, but since I'm from the UK,

I'm - I'm just gonna pick on British accents, I guess.

Keep clear notes on the experiments you run, because what happens in

every machine learning project is, after a while, you

have trained 30 models, and then you and your team members are going,

"Oh yeah, we tried that idea two weeks ago - did it work?" And if you have

clear notes from when you actually did that work two weeks ago,

then you can refer back rather than have to rerun the experiments.

Um, the other thing that some groups do is, um,

have a spreadsheet that keeps track of what's the learning rate you use?

What's the number of hidden units?

What's this? What's this? What's this?

Or, or, or keep it in a,

a text document, which will make it easier to refer back,

to know what you tried earlier.

Um, this is one piece of commonly-given advice.

This is one of those things that

every machine learning person knows we should do this but on average,

we're very bad at doing this, but,

but, but, but, you could, I don't know.

But at the times I've managed to keep good notes, it saves

a lot of time, right, of trying to remember what exactly you tried two weeks ago.
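A minimal sketch of that "spreadsheet of experiments" idea: append one row per training run to a CSV so you can look up later what you tried. The column names and the example values are mine, not from the lecture.

import csv
import datetime
import os

LOG = "experiments.csv"

def log_run(lr, hidden_units, architecture, dev_accuracy, notes=""):
    # Append one row per experiment; write the header only on first use.
    new_file = not os.path.exists(LOG)
    with open(LOG, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "lr", "hidden_units",
                             "architecture", "dev_accuracy", "notes"])
        writer.writerow([datetime.date.today(), lr, hidden_units,
                         architecture, dev_accuracy, notes])

# For example, after a training run:
log_run(lr=1e-3, hidden_units=64, architecture="Conv1D+GRU",
        dev_accuracy=0.91, notes="struggles with British accents")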

Okay. So, a lot of this class will be on this process of how to get data,

uh, design the model, train the model,

eventually test the model, and then iterate.

Okay. So, a lot of this class is on this.

So, I wanna jump ahead. So, say you have a good enough model and you want to deploy it.

Okay. So, step six,

I guess is deployment.

Now, um, um, this is, er, uh, um,

one of the reasons I want to step through

this example, going through a concrete example, is

I find that when you're learning about machine

learning for the first time, it's often seeing,

you know, what - what my teams call war stories,

kind of stories of projects that, that,

that others have built before, that often provides the best learning experience.

I mean, I have built speech recognition systems.

It took me like a year or two years to do it.

So, rather than, you know,

having you spend two years of your life building

speech systems, I can summarize a war story,

right, to tell you what the process is like.

I'm hoping that these concrete examples of what building these systems is like in,

you know, large corporations can help you accelerate

your learning without needing to get two years of on-the-job experience.

You can just pick up the salient points.

Okay. Now, if you're deploying a system like this,

one of the things, um, and this is actually true.

This is actually a real phenomenon for deploying

speech systems is you have the audio clip,

you have a neural network,

and then, you know, this will output zero or

one, and the neural networks that work well

will tend to be relatively large,

right, a relatively large model -

a lot of hidden units, relatively high complexity.

And, um, if you have some of the smart speakers in your home, um,

you recognize that a lot of them are Edge devices as

opposed to purely cloud computation, right?

So we all know what the cloud is.

Um, uh, and what an Edge device is.

An Edge device is a smart speaker that's in your home or the cell phone in your pocket.

So, Edge devices are, you know, the things that are close to

the data, as opposed to the cloud, which is

the giant servers we have in our data centers, right?

So, um, because of network latency,

er, er, and, and,

and because of privacy a lot of these computations are

done on Edge devices like a smart speaker in your home or,

er, er, like er, I guess Hey Siri or Okay Google can wake up your cell phone, right?

And so, Edge devices have much lower computational budgets and much lower power budgets,

limited battery life, much less powerful processes

than we have in our cloud data centers.

And so, it turns out that se -

serving up a very large neural network is quite difficult.

Right? It's very difficult for,

you know, a low-power,

inexpensive microprocessor sitting on a smart speaker in your living room

to run a very large neural network with a lot of hidden units and a lot parameters.

And so, what is often done is to actually do this.

Which is to input an audio clip and then have a much simpler algorithm,

Uh, figure out if, you know,

anyone is even talking, right?

Because so the smart speaker, you know,

in my living room hears silence most of the day, right?

Because usually there's just no one at home, right?

There's no - no voice.

And then only if it hears, you know,

someone talking do you feed it to the big neural network that you've trained and ramp up,

use a larger power budget, in order to classify zero or one.
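The two-stage idea just described might look roughly like this: a cheap energy check decides whether anyone is even talking, and only then does the large trigger-word network run. The threshold value, the featurization, and the stand-in model are assumptions for illustration, not the lecture's implementation.

import numpy as np

VAD_THRESHOLD = 0.01   # assumed energy threshold (the "epsilon" discussed below)

def simple_vad(audio):
    # Non-ML VAD: is the clip's average energy above the threshold?
    return float(np.mean(audio ** 2)) > VAD_THRESHOLD

def detect_trigger_word(audio, big_model):
    # big_model: any callable mapping features -> probability of the trigger word.
    if not simple_vad(audio):
        return 0                            # silence: never wake the big network
    features = audio[np.newaxis, :]         # placeholder featurization
    return int(float(big_model(features)) > 0.5)

# Example with a stand-in "big model" and one second of fake 16 kHz audio.
dummy_model = lambda feats: 0.9
clip = 0.2 * np.random.randn(16000)
print(detect_trigger_word(clip, dummy_model))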

Okay. Um, this component goes by many different names, um,

but in - in reasonably standard, though not totally standard, terminology

in the literature, I'm gonna call this VAD,

for Voice Activity Detection.

[NOISE] Right.

Um, it turns out that voice activity detection

is a standard component in

many different speech recognition systems. If you are using a cellphone, for example,

VAD is a component that tries to figure out if anyone's even talking, because if it thinks

no one is talking, then there's no need to encode the audio and try to transmit the audio,

right? Aarti, could you?

Okay.

Yeah. [LAUGHTER] Yeah. Um, and so,

uh, so the next question I want to ask you and,

and I - I - I thought this is timely because, um, uh,

well there's a couple options, right?

Option one is to build a non-machine learning-based VAD system,

Voice Activity Detection system which is just, you know,

see if the volume

of the audio your smart speaker is recording is greater than epsilon.

So, if it's silence, just forget it.

And option two is train a small neural network, um,

to recognize human speech, right?

And so, uh, my next question to you is,

um, if you're working on this project,

would you pick option one or would you pick option two?

Right? As you, as you,

as you move toward - oh sorry and I think,

um, a small neural network.

So, a small neural network or in some cases,

I've seen people use a small support vector machine

as well for those who don't know what that is.

A small model can be run with a low computational budget.

It's a much simpler problem to detect if someone is talking than to

recognize a word they said, so you can actually do this, you know,

do this with reasonable accuracy with a small neural network. But if you actually work on

this project for CS 230, which would you try first?

So, could we come to the next question?

Yes.

Okay.

I cant figure out how to do that.

Oh oh sure.

Okay.

Yeah, yeah, yeah, you can let them

start answering I guess and then while you figure out the projection.

Cool.

I will just keep unlocking it periodically.

Are people able to vote?

No, there are no votes yet.

Well, I guess you write so much code,

you have a shortcut to go to your coding environment on your laptop. Great. [LAUGHTER].

Yeah. That way?

All right cool. Great thank you.

[NOISE]

Oh wow! Great. Right. You're answering quickly.

Another like 20 seconds if that's enough time to give the answers.

[NOISE]

All right cool.

Um, that's fascinating.

There's a lot of disagreement in this class.

Um, people must say why,

why would you choose option one,

why would you option two?

And then I - I - I - I - I have a very strong point of view on what I would do, right?

Um, but, but I'm curious why,

uh, why option one and why option two. Go ahead.

Option one is easy to debug in the environment.

Easy to debug. Anything else? Either option one or option two?

Option one is simple.

Option one is simple. Anything else?

Option two is not an environment.

Option two is not the environment. [LAUGHTER].

I was thinking about option one, right?

If you have a dog in your house and he barks, it can activate.

So, when you have option two,

you can probably kind of already, like, simplify the problem, but it does not,

I mean, it's another way to, I mean,

activate the machine, but it doesn't by itself.

[NOISE] That's right, if it's talking.

Yeah, if the dog's barking,

option two would be much better.

Yeah, yeah, someth - something that kind of ignores noise.

Cool. All right. And just two of you and in each -.

Option two - what about noise from people, like, whispering?

Like, what if someone is whispering?

Huh?

What if someone is whispering?

What if someone's whispering?

[inaudible].

Yeah, yeah, go. Yeah, go ahead, have you.

[inaudible]

I think it will pick up the background noise.

Yeah, cool, yeah. And in a noisy place like, you know,

I have a friend who, uh,

lives actually right next to the train station.

So, right, so

option one will trigger on a lot of things [NOISE],

which is possible. Yeah.

Whichever option you take has to be running constantly.

So, you want something that is very cheap,

so it seems like option one is better than [inaudible].

Yeah, whichever we pick has to be constan - constantly running, so it has to be cheap,

low power, low compute.

So, le - let me show you some of the pros and cons.

Um, uh, so, um,

uh, I think, yeah,

there are pros and cons to option one and option two, which is why there were so,

so, so many votes for both options.

Personally, I would choose option one.

Um, but, but let me just,

let's just discuss the pros and cons, right?

I think that's, um, uh,

option one, um, is just a few lines of code.

Is, is, yes, maybe option two isn't that complicated,

but option one is even simpler.

And I think that, um, uh,

actually, maybe I would say, if I hadn't worked on

this problem before, I would choose option one.

Uh, but since I have experience in

speech recognition, I know you eventually need option two.

But that's because I, I - because I've worked on this problem before.

But if it's your first time working on the speech activation problem,

I would encourage you, on average, to try

the really simple, quick and dirty solution and go ahead, and,

um, so let's see how long would it take to implement this.

Right? I would say like 10 minutes,

five minutes, I don't know, right?

How long would it take to implement that?

I don't know. Like what?

Four hours? One day?

I, I, I don't really know actually.

Right. Um, le - let me just write one day,

I'm, I'm not quite sure.

Right, but if, um,

option one can be implemented in 10 minutes then I would encourage you to do that.

And go ahead and put the smart speaker in your home or in your potential users' homes.

And only when you find out that the dog barking

is a problem or the train at the railway station or whatever is a problem,

then go back and invest more in fixing it, right?

And, in fact, um,

It's true that maybe it's annoying if the dog barking keeps on waking up the system.

But maybe that's okay, because if the large neural network then screens out

all the dog barking, then the overall performance of the system is actually just fine.

And, uh, and, and,

and you now have a much simpler system.

Right? But, uh, but, but it,

it turns out that, um, the reason, uh,

you might need to go to option two eventually is

because there are some homes in noisy environments.

Uh, uh, you know, there's constant background noise,

and so that will keep the large neural network running a little bit too frequently,

so, so if you have a large engineering budget,

you know, so, so, some of the smart speaker teams

have hundreds of engineers working on them.

So, if you have hundreds of engineers working on it,

then sure, option two will perform better.

But if you're a startup team,

a scrappy startup team with a few of you working on a class project, you know,

the evidence that you need that level of complexity is not that high,

and I would really do the simple thing first, and,

and wait until you start to gather evidence that you really should make the investment to build

the more complex system before actually making that investment of days or,

e - er, and eventually - I think this is one day to build your first prototype,

right, and then eventually you'll be,

you'll build something more complicated.

Um, it turns out that, um,

the other reason [NOISE] ,

um, the other huge advantage of the simple method is the following.

Um, and this is one of the frankly,

this is one of the, this is actually one of the big problems and

big weaknesses of Machine Learning algorithms and deep learning algorithms,

which is what happens is, uh,

um, when you build a system and you ship,

ship a product, the data will change, right,

and so, um, I'm gonna simplify the example a little bit.

But, you know, I, I know Stanford is very cosmopolitan,

and Palo Alto is very cosmopolitan.

So if you collect data in this region,

you get accents from people all over the world,

right, because, because that's Stanford, or that's Palo Alto.

But, but just to simplify the example a little bit, [NOISE] um,

let's say that you train [NOISE] on, uh, US accents,

right, uh, but you know,

for some reason, uh,

uh, when you ship a product,

maybe it sells really well in the UK and you start getting data

[NOISE] with a UK or with British accents.

Right? So, one of the biggest problems

you face in practical deployment of Machine Learning systems is that,

the data you train on is not gonna be the data you need to perform well on.

Um, and, and I'm gonna share with you some practical ideas for how to solve this,

but this is one of those practical realities

and practical weaknesses of Machine Learning,

That is actually not talked about much in academia, uh,

uh because it turns out that the data sets we have in academia are

not set up well for researchers to study and publish papers on this.

I think we can start new Machine Learning benchmarks in the future.

But there's one of those problems that is actually kind

of under appreciated in academic literature.

Uh, but that [NOISE] is a problem facing many,

many practical deployments of Machine Learning algorithms.

Um, and, uh, and so,

more generally, the problem is one of data changing, right, and, uh,

you might have new classes of users with new accents or you might train a lot on,

uh, the, maybe you get data from

even Stanford users and maybe Stanford is not too noisy or Stanford has a certain,

you know, types of characteristics.

When you ship it to another city or another country that's much more noisy,

uh, you know, different background noise.

[NOISE] Right? Or, uh,

you start manufacturing the smart speaker and to lower the costs of the speaker,

they swap it out,

they swap out the high-end microphone that you use from your laptop to collect the data,

for a lower-end microphone, [NOISE] right?

It's a very common thing done in,

you know, well, done in manufacturing, right?

If you could use a cheaper microphone why not, right, and,

and often to human ears,

The sound sounds just fine on a cheaper microphone,

but if you train your learning algorithm using your,

you know, I guess yeah, well,

I use a Mac, but a Mac has pretty decent microphones.

If you train using the data you collect from a Mac

and then the product eventually has a different microphone,

it may not generalize well.

So, one of the challenges of, um,

Machine Learning is that you often develop a system on one data set,

and then when you ship a product,

something about the world changes, uh, and, and,

and your system needs to perform on

a very different type [NOISE] of data than what you had trained on.

Um, and so, [NOISE], Um, and so,

what will happen is after you deploy the model, uh,

the world may change and you often end up going back to get more data,

redesign the model right and,

and, and I guess, er, and this is,

this is, uh, uh, the maintenance of the Machine Learning model.

Uh, I wanna give some other examples.

A web search, right,

uh, this happens all the time,

at multiple web search engine which is, uh,

you train a neural network or you train a system to,

to give relevant web search results.

But then something about the world changes.

You know, for example there's a major political event,

some new person is elected president of some foreign country

or there's a major scandal or just the Internet changes,

right, or, or there's a, a, a, actually what,

what happens in China is that new words getting invented all the time.

Uh, uh, in, in, in Chin - the Chin - uh,

this is the difference between, say, Google and Baidu:

the Chinese language is more fluid than the English language,

and so new words get invented all the time.

And so, the language changes,

and so whatever you've trained just isn't working as well as it used to, right.

Um, or, or, or maybe a different company,

suddenly shuts off, you know,

their entire website to your search index because they don't want you, uh,

indexing their website, and so, like,

the Internet changes and what you've done doesn't work anymore.

Um, or, um, uh, self-driving.

Okay. [NOISE]. Uh, it turns out that you build a self-driving car in California,

and then you try to deploy these vehicles in Texas, um, you know,

it turns out traffic lights and Texas look

very different than traffic lights in, in, in California.

So, a model trained in California doesn't transfer to Texas -

so a neural network trained to recognize, uh,

California traffic lights actually doesn't work very well on Texas traffic lights.

Right? Uh, I'm trying to remember which way around this is,

but I think California and Texas have

a different distribution of horizontal versus vertical traffic lights, for example.

As humans, we don't notice - you just go,

oh yeah, red, yellow, green -

but the learning algorithm doesn't actually generalize

that well if you go to a different location.

You go to a foreign country - again, traffic lights,

signage, the lane markings all change, um.

Or I guess, well,

one example I was working on earlier this week, right?

Uh, manufacturing, right - Landing AI,

working on inspection of parts in factories, um,

and so if you are doing visual inspection [NOISE] in the factory,

and the factory starts making a new component,

you know, they were making this model of cell phone,

but cell phones turn over quickly, and so

a few months later they're making a different type of cell phone, or something

weird happens in the manufacturing process, or

the lighting changes, and there's a new type of defect.

So the world changes, right?

And, um, so - oh, it got locked again?

[NOISE] What I'd like to do is, um,

actually revisit the previous question in light of this,

uh, the world changes, phenomenon, right?

[NOISE] Which is, let's say you've collected a lot of

data with American accented speakers.

Um, and then, you know, we'll ship the product to the UK.

And then, um,

and then for some reason you find that you have all these British accent speakers,

right, trying to use your smart speaker.

So, between these two algorithms,

the non machine-learning approach,

which is to set a threshold, versus the trained neural network -

which system do you think will be more robust for VAD, voice activity detection?

[NOISE]

All right, let's take like another,

I don't know, 40 seconds?

[NOISE]

All right. Yeah. Interesting. Do people want to comment?

Well, more, more, more people voted for a non-ML.

Does someone want to explain why. Yeah. Go ahead.

[inaudible]

Yeah.

You're right. So we said,

for the VAD, voice activity detection, uh,

if you just measure the volume,

then it doesn't really depend on,

on the accent, right?

So, non-ML might be more robust.

Anyone else? [NOISE]. All right.

So, again, I'm going to tell you my bias.

So, it turns out that, um,

if you train a small neural network,

uh, on, um, uh, you know,

American-accented speech, uh,

there's a bigger chance

that your neural network,

because it's so clever, right,

will learn to recognize American speech,

[NOISE] and have a harder time generalizing to British-accented speech. Does that make sense?

And so, one of the things that I've seen with a lot of teams,

uh, where, where, so, so,

one way that the non-ML thing could fail to generalize

would be if British speakers are systematically,

you know, louder or softer than American speakers, right?

So, you know, I don't know if

Americans stereotypically are louder or less loud than the British,

but, but, you know, but if American and British speakers,

if one country just has louder voices or softer voices,

then maybe the threshold you set won't generalize well,

but that seems unlikely.

I, I don't see that being realistic.

But, um, but if you train a neural network with a lot of parameters, then, uh,

it's more likely that the neural network will pick up

on some idiosyncrasy of American accents

to decide whether somebody is even speaking, um,

and thus maybe be less robust in generalizing to British-accented speech, right?

And another way to think about this

is, to take an even more extreme example,

imagine that, um, you're using VAD for

a totally different language than, than English, right?

Where, um, take a different language, you know,

Chinese or Hindi or Spanish or something where the sounds are really different.

If you train a VAD system to detect, you know, English,

it may not, uh, work for detecting Spanish or Chinese or French or,

or some other language.

Um. And so, if you think of British accents as, uh,

as somewhere on that spectrum,

not a foreign language by any means,

but just a bit more different,

then, I think, the,

um, non-ML system is, is,

is more likely to be robust, right?

And so, one of the lessons that too many, that,

that a lot of machine learning teams learn the hard way is, uh,

um, if you don't need to use a learning algorithm for something,

if you can hand-code a simple rule, like,

if the volume is greater than 0.01,

do this or that, uh, those rules can be more robust.

Um, and one of the reasons we

use learning algorithms is when we can't hand-code something. Like,

I don't know how to hand-code something to detect a cat,

or detect a car on the road,

or detect a person.

So, use learning algorithms for those.

But if there is a hand-coded rule

that actually does pretty well,

you find that it is more robust to shifts in the data and will often generalize better.
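
To make that contrast concrete, here is a minimal sketch of the kind of hand-coded rule being described: a single volume threshold applied to a short audio frame. The 0.01 threshold is the number mentioned above; the frame length and the root-mean-square volume measure are assumptions for illustration.

    import numpy as np

    VOLUME_THRESHOLD = 0.01   # the single "parameter" of the non-ML approach

    def is_speech(frame: np.ndarray) -> bool:
        """Hand-coded VAD rule: declare voice activity whenever the
        root-mean-square volume of a short audio frame exceeds a threshold."""
        volume = float(np.sqrt(np.mean(frame ** 2)))
        return volume > VOLUME_THRESHOLD

    # Illustrative usage on synthetic 30 ms frames at 16 kHz (480 samples):
    rng = np.random.default_rng(0)
    quiet = 0.001 * rng.standard_normal(480)
    loud = 0.05 * rng.standard_normal(480)
    print(is_speech(quiet), is_speech(loud))   # False True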

And, uh, if any of you take, uh, uh,

we talk a little bit,

about this in CS 229,

I think CS 229 talks about this as well,

but this particular observation is backed up by very rigorous learning theory.

And the learning theory

basically says that the fewer parameters you have, uh,

if you still do well on your training set.

If you can have a model with very few parameters

that does well on your training set,

it will generalize better, right.

So, there's this very rigorous machine learning theory that basically says that,

and in the case of the non-machine-learning approach, there's

maybe one parameter, which is, what's the threshold epsilon?

And if that's worked well enough on your training set,

then the chance of it generalizing well, even when the data changes, is,

um, much higher, right?

Um.
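
For reference, one standard form of the theory being alluded to is the uniform-convergence bound for a finite hypothesis class, roughly as it appears in the CS229 lecture notes; the exact constants are not the point, only that the gap between training and generalization error grows with the size of the model class and shrinks with the amount of training data:

    $$ \varepsilon(h) \;\le\; \hat{\varepsilon}(h) \;+\; \sqrt{\frac{1}{2m}\,\log\frac{2\,|\mathcal{H}|}{\delta}} \qquad \text{for all } h \in \mathcal{H}, \text{ with probability at least } 1-\delta, $$

where $\hat{\varepsilon}(h)$ is the training error, $\varepsilon(h)$ is the generalization error, $m$ is the number of training examples, and $|\mathcal{H}|$ is the number of hypotheses you could possibly pick. A single-threshold rule corresponds to a tiny effective $\mathcal{H}$, so the gap term stays small; a neural network with many parameters corresponds to a vastly larger effective $\mathcal{H}$.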

Now, the last question,

um, I want to pose for discussion today is, um,

when, when discussing deployments, uh, oh, and,

and so, one of the lessons of deployment is,

that's just the way the world works.

Yeah. Pretty much any machine learning system you deploy,

the world will usually change, and you often end up collecting

data and having to iterate and maybe improve the model, right?

And that's how you'd fix the model for British speakers, for example.

Um. So, we talked about edge deployments,

as well as cloud deployments, right?

And so, um, ignoring issues of user privacy and latency,

which are super important,

but for purposes of this question let's, let's,

let's put aside issues of user privacy and network latency.

Um, if you need to maintain the model,

that means there needs to be a means of updating the model, right,

even as the world changes.

[NOISE] Um, oh sorry, I mistyped.

It should read, does a cloud or edge deployment make maintenance easier, not "of", right?

Uh, once again, just enter a one-word answer, and, and why?

[NOISE] And so, maintenance is when the world changes, something changes,

so you need to update the learning model to take

care of this British accent.

So, which type of deployment makes it easier?

Let's take like, yeah, another two minutes to enter your answers.

[NOISE]

All right. Another like 50 seconds.

All right. Cool. Now, let's see what people wrote.

[NOISE] Wow.

Cool. Great. All right.

Almost everyone is saying cloud.

Most people are saying cloud.

Almost. All right.

Cool. Great. And, and just to summarize,

I think there are actually two reasons why.

Most people said it's easier to push updates, that's part of it.

I think the other part of it,

is that, if all the data is at the edge,

if all the data is processed, you know,

in the user's home and none of it is in the cloud,

then even if you have all of these unhappy British-accented users,

you may not even find out, right?

You're sitting in the company headquarters,

you have all these users, and mysteriously, you know,

they seem to be not using your device, maybe because they're unsatisfied with it,

but if the data isn't coming into your servers in the cloud,

then you might not even find out about it.

Now, there are serious issues around user privacy,

uh, uh, as well as security, right?

So, so, so, please, if you ever build this type of product,

please be respectful of that, uh,

uh, and, and take, take,

take care of that in a very thoughtful way that is respectful of users.

But, um, [NOISE] if, uh,

first, if, um, so this is the cloud, right?

If you have a lot of edge devices and all the data is processed there,

um, you won't even know what your users are doing,

and whether they're happy or unhappy. You just don't know.

But if some of the data can stream to your servers in the cloud,

and if you handle user privacy with

really judicious, good user consent that tells users what you're doing with the data,

if you take care of that in a, in a, in a, in a,

in a reasonable and sound way,

if you're able to examine some of the data,

then you can at least figure out that, gee, it looks like, analyzing the data,

there are these people with this accent,

or there's background noise that, um,

uh, is, uh, giving them a bad product experience, and you can also maybe have the data,

so you can gather the data from the edge,

to feed back to your model, right?

So, so it lets you detect that something's gone wrong,

um, and it lets you have the data to retrain the model,

to solve that British accent problem;

we can retrain the model with more British-accented speech,

and then finally it lets you push the model back out.

All right. So, [NOISE] first, it lets you detect what's going on; two,

it gives you data for training;

and three, it lets you more easily push the model back up,

uh, push, push, push the new model to production, to deployment, okay?
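
A minimal sketch of that three-step loop, written as a single maintenance pass. The function names, the consented data stream, and the 95% baseline are illustrative assumptions standing in for whatever training, evaluation, and deployment machinery a real team would plug in here.

    from typing import Callable, List, Tuple

    def maintenance_cycle(
        new_data: List[Tuple[bytes, int]],        # consented (audio, label) pairs gathered from the field
        train_fn: Callable[[List[Tuple[bytes, int]]], object],
        eval_accuracy_fn: Callable[[object], float],
        push_to_devices: Callable[[object], None],
        baseline_accuracy: float = 0.95,
    ) -> bool:
        """One pass of the loop above: retrain on newly gathered data
        (e.g., British-accented speech), check the monitored metric,
        and push the new model to the edge only if it does not regress."""
        model = train_fn(new_data)                # retrain on the newly gathered data
        accuracy = eval_accuracy_fn(model)        # check the monitored metric before shipping
        if accuracy >= baseline_accuracy:
            push_to_devices(model)                # push the updated model out to devices
            return True
        return False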

Um, and this is also why, uh,

even if your computation needs to run on the edge,

if, um, you could, in a way that is

respectful of user privacy and transparent about how you use data,

if you can get even a small sample of data,

or have a few volunteer users send you some data back to the cloud,

that will greatly increase your ability to detect that something's gone wrong,

as well as maybe give you some data to retrain the model.

Uh, so even if you can only do,

uh, sort of, push updates, right, this, this will,

this will help greatly,

okay? Um. All right.

So, finally, one last comment I think uh, one, one,

one last challenge is uh,

in a lot of machine learning systems,

you're not done at deployment.

There's a constant ongoing maintenance process.

And I think one of the processes, uh, you know,

that AI teams are getting better at as well is setting up QA,

to make sure that when you update the model,

you don't break something.

So, you know, QA in large companies,

the quality assurance process, is kind of, you know, the testers.

And I think the way you test machine [NOISE] learning algorithms is

different than the way you test traditional software,

because the performance of machine learning algorithms is

often measured in a statistical way, right?

So it's not that it either works or it doesn't work.

It, it, it neither just works nor just doesn't work.

Instead, it works, you know, 95% of the time or something, and so, a

lot of companies are evolving their QA processes to have

this type of statistical testing, to make sure that even if you change the model,

you do a push update,

it still works, you know,

95 or 99% of the time or something. And so, so,

so, they're putting in place new QA test processes as well.

Okay? All right.

Hope that was helpful, stepping through what

the full arc of a machine learning project will look like.

Uh, later this quarter, or in course three, as well as in later

lectures, [NOISE] we'll keep talking about

machine learning strategy and how to make these decisions.

So, let's break for today.

