Working out

Okay, what really happens down the road to all our jobs? We know that automation replaces many human jobs and generates many others, and that artificial intelligence will accelerate this creative destruction. Historically, the default view among business and technology leaders, supported mostly by hand-waving, is that this unstoppable march will bring a wealth of new jobs, if only the masses can somehow receive proper technological education.

It’s hard to assess the recent historical record on job loss versus gain, although today’s New York Times offers an interesting take. And while we can easily spot job losses, new jobs created by machines, “almost by definition, are harder to imagine,” as MIT economist Erik Brynjolfsson pointed out in a session at the American Association for the Advancement of Science (AAAS) annual meeting in Boston on Saturday.

But in the past couple of years the public discussion has grown more worried, with one dark perspective on implications well described in a poorly titled essay by Rutgers historian James Livingston.

At the AAAS session, Harvard computer scientist David Parkes presented some relevant thoughts from the 100 Year Study on Artificial Intelligence project. Here are a few quotes from the study’s report on AI and real life in 2030, published last September:

  • “AI will gradually invade almost all employment sectors, requiring a shift away from human labor that computers are able to take over.”
  • “To date, digital technologies have been affecting workers more in the skilled middle, such as travel agents, rather than the very lowest-skilled or highest-skilled work. On the other hand, the spectrum of tasks that digital systems can do is evolving as AI systems improve, which is likely to gradually increase the scope of what is considered routine. AI is also creeping into the high end of the spectrum, including professional services not historically performed by machines.”
  • “A spectrum of effects will emerge, ranging from small amounts of replacement or augmentation to complete replacement. For example, although most of a lawyer’s job is not yet automated, AI applied to legal information extraction and topic modeling has automated parts of first-year lawyers’ jobs. In the not too distant future, a diverse array of job-holders, from radiologists to truck drivers to gardeners, may be affected.”
  • “As labor becomes a less important factor in production as compared to owning intellectual capital, a majority of citizens may find the value of their labor insufficient to pay for a socially acceptable standard of living. These changes will require a political, rather than a purely economic, response concerning what kind of social safety nets should be in place to protect people from large, structural shifts in the economy. Absent mitigating policies, the beneficiaries of these shifts may be a small group at the upper stratum of the society.”
  • “Longer term, the current social safety net may need to evolve into better social services for everyone, such as healthcare and education, or a guaranteed basic income. Indeed, countries such as Switzerland and Finland have actively considered such measures. AI may be thought of as a radically different mechanism of wealth creation in which everyone should be entitled to a portion of the world’s AI-produced treasure.”

At another packed AAAS session, Alta Charo, professor of law and bioethics at the University of Wisconsin at Madison, gave a masterful quick summary of the history and findings of the report on human genome editing from the National Academy of Sciences. Released last week, this report’s recommendations drew plenty of public attention—far more than last fall’s AI in 2030 report, although AI will have much greater impact in the next decade or two or three.

An Engine for solving societal problems

MIT’s accelerator brings an incubator and funding to startups that matter.


“One of my frustrations as an academic is that over the last twelve years we’ve produced a lot of really useful methods and techniques, and almost none of them has been put into practice,” one prominent MIT professor told me earlier this year. “This is not an unusual problem for academics. But it’s frustrating to have things that you know could help and they’re not helping.”

Generating the intellectual property (IP) is only the very first step on the road to the real world. Established companies often are not very interested in IP, even game-changing IP. They are more likely to want prototypes, and people who know how to build the prototypes.

They want, in brief, to work with startups.

That’s one reason why this professor launched a startup. It’s also one reason why MIT actively spreads the entrepreneurial gospel to students and staff who might not have considered it a few years back, and keeps deepening its ecosystem of competitions, advisory networks and resources like the Startup Exchange.

And it’s the thinking behind the Engine, the startup accelerator that MIT president L. Rafael Reif announced yesterday. The Engine will combine an incubator with funding for startups focused on real needs.

“When it comes to the most important problems humanity needs to solve — climate change, clean energy, fresh water and food for the world, cancer, and infectious disease, to name a few — there is no app for that,” as Reif explained in the Boston Globe. “We believe the Engine will help deliver important answers for addressing such intractable problems — answers that might otherwise never leave the lab.”

Venture capitalists do a reasonable job of funding many tech companies, but very few VCs are interested in startups that may take more than five years to pay off. The Engine won’t sponsor quick-turnaround firms, or companies that join the thundering herds of marketing middlemen, or oddities like the outfit that claims to deliver wine matched to your DNA.

Instead the funds might go to biotechs, like Oxalys, which do very well if they can even get their drug candidates into first clinical trials within a few years. Or makers of industrial products, like Dropwise’s energy-saving coatings for power plants, which manufacturers probably will adopt quite slowly because that’s how that industry works. Or any number of truly innovative, truly needed products and services.

It will take a decade or more to see how the Engine’s bets turn out. Many will fail. But these are bets we need.

The write stuffing


When I graduated from high school, all I really knew professionally was that I wanted to write on many topics. Last weekend when people at my high school reunion asked politely what I wrote about, I did find myself saying, many topics—in fact, way more now than when I worked as a staff journalist. Okay, I’m not covering the full human condition. Much of the universe is unexplored. But so far this year I’ve done stories about medical hackathons and crowdsourced scientific challenges, global data security and global financial crises (still separate topics so far!), drug development crises, the future of suburbia, steam power, gene therapy, agricultural particulates, the challenges of small data in healthcare, chemical sensing on a chip, employee cross-training, urban carbon dioxide release, jet engines, zebrafish brains, surgery by telemedicine and robotics manufacturing, among others.

Man petabytes dog


One of the earliest stories I wrote about genomics past the gee-whiz aspects of the Human Genome Project covered the first whole-genome sequencing of a dog. Kerstin Lindblad-Toh of the Broad Institute patiently explained the project to me, and scientists who used dog models to study inherited blindness told me why they were more than excited about the prospects.

More than a decade later as I’m putting together a special report on big data for Nature, the genomic revolution has marched ahead, well, much as predicted.

The cost of genomic sequencing has dropped arguably faster than that of any other technology in human history. Research initiatives that most of us haven’t heard of are gathering genomic data on hundreds of thousands of people. This flood of data is multiplied by data from proteomics and other omics fields now scaling up toward genomic volumes. We talk casually about petabytes (millions of gigabytes). Data scientists, many of them coming in from fields outside biology, are integrating these data and making some astonishingly good predictions about which drugs might work for a given condition, without needing any new wet-lab work. We’ve seen wonderful progress in stem cells, cellular models and genetic engineering tools. And this revolution is on television, as well as on websites, social media and an entirely sufficient plenitude of TED talks.
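To put those petabytes in perspective, here is a back-of-envelope sketch in Python. The figures are illustrative assumptions, not numbers from the projects mentioned above: roughly 100 GB of raw reads for one 30x whole human genome, and a hypothetical cohort of 500,000 participants.

```python
# Rough storage estimate for a large sequencing cohort.
# Both figures below are assumptions for illustration only.
GB_PER_GENOME = 100        # assumed raw data per 30x whole genome, in GB
COHORT_SIZE = 500_000      # assumed biobank-scale cohort

total_gb = GB_PER_GENOME * COHORT_SIZE
total_pb = total_gb / 1_000_000   # 1 petabyte = 1,000,000 gigabytes

print(f"{total_gb:,} GB is about {total_pb:.0f} PB")
```

Under those assumptions, a single national-scale cohort generates tens of petabytes of raw sequence data before any proteomics or imaging is added.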

Not so much in the clinic, though.

Of course omic research on many diseases is starting to pay off for actual patients—for example, The Cancer Genome Atlas has spun off clues for real advances in many cancers—and its grand march points straight ahead through enormous but movable objects.

But clinical steps are slow. Part of the reason is the sheer complexity of disease, for instance the ways cancers duck and weave to dodge treatments. And, of course, clinical trials can’t be rushed.

Last week I asked one neuroscientist why we still lack drugs that treat the causes of neurodegenerative diseases, as opposed to their symptoms. She responded, reasonably enough, that it takes years to build better lab models of the disease and push findings from those models into the long tunnel of pre-clinical work toward trials. She expected that some of the compounds coming from her work will help. She didn’t predict home runs.

But we haven’t lost the gee-whiz discoveries and our faith that they’ll end up in the clinic in our lifetimes. My favorite: Scientists can take a human skin cell, bombard it with select small molecules until it morphs into a reasonable facsimile of an insulin-producing cell (a notoriously fickle beast) and produce such cells in the millions. Maybe those cells will arrive in the next decade, bringing actual cures. And although I don’t follow discoveries in dog proteomics, I see that University of California, Berkeley researchers have restored vision to blind dogs via genetic therapy. Progress, yes. Dogged research!


The Seoul of a new innovation machine


South Korea’s spending on research and development is climbing up to 5% of its gross domestic product. That’s the highest rate in the world, almost twice that of the United States.

Writing a quick snapshot of Korean science for Nature, I keep coming across such striking contrasts.

Heightened R&D spending is one foundation for the push for a “creative economy” that President Park Geun-hye launched when she took office two years ago. A centerpiece of her agenda, this initiative aims to boost the creation of innovative products and services, especially by the smaller firms that often struggle for air in an economy dominated by giants such as Hyundai and Samsung.

The quest for a creative economy builds on many multi-year, multi-billion-US-dollar projects, among them the International Science and Business Belt. This hub for science, technology and business is now rising in Daejeon, a city an hour south of Seoul by high-speed train that is already crammed with both government and industry research centers.

How well will these grand governmental top-down innovation programs pay off?

Well, who knows?

But I’m impressed by not just the scale but the speed of some of these bets.

One example comes from the Korea Advanced Institute of Science and Technology (KAIST). Launched in the 1970s as a kind of Korean version of the Massachusetts Institute of Technology, KAIST enrolls about the same number of students as MIT with a third the budget.

Like MIT, KAIST is investigating “flipped classrooms,” in which students watch lectures online and then go back and forth with professors and each other in the classroom—a more interactive alternative that seems to work well for fairly obvious reasons.

MIT has come up with quite wonderful technology for such teaching (supplying the platform for edX online courses). It’s going ahead with a few great courses and thoughtful research about optimizing the benefits thereof. But KAIST is adopting flipped classrooms much more quickly, planning to deliver no fewer than 800 such classes two years from now.

Mind the gap, manufacturers

Say you’re a professor at a major research university. You’re brilliant, of course, and well-funded. Some of your well-guided hotshot grad students and postdocs create a technology that shrieks out for commercialization, and the university’s intellectual property folks plunge into patenting.

Maybe the hotshots then get together with a veteran executive or two and sell the idea to a venture capital firm. Their startup is off and running, and the world awaits with joy.

Or maybe the venture capitalists are otherwise occupied that month, the hotshots wander off to the next great opportunity and the idea sits on the shelf.

All too often, professors tell me, the major manufacturers who might really exploit the technology show no interest in bringing it to market from that stage. Their development ecosystem doesn’t work like that—they want to buy the startup when it has shown progress commercializing the work. They want not just patents but people, understandably enough.

This does make you wonder, though, whether more manufacturers should consider extending their own research groups a little further down the food chain to cherrypick a few of the best available intellectual properties and bring them forward much as a startup would. Maybe a few million dollars invested in this form of intrapreneuring would pay off very, very well down the road.

Big research ideas in five minutes

The Cambridge Science Festival’s launch event, Big Ideas for Busy People, presented quick snapshots of recent work by 10 researchers “who are established stars or stars on the rise,” noted John Durant, director of the MIT Museum and the festival.

The topics ranged from disaster preparedness to the rise of atmospheric oxygen and from dancing with bionics to how today’s slot machines are designed to addict their patrons. Each researcher raced to summarize their ideas and results as a five-minute clock ticked down, and then answered thoughtful questions from an audience of hundreds in First Parish Church on Friday evening.

Some notes and quotes:

“Why do we so often make decisions that we later regret?” asked Harvard’s Daniel Gilbert. “We have a fundamental misperception of time; we will change much more than we predict. It’s an illusion we all have—that we’ve just become the people we will be for the rest of our lives.”

“The bad news is yes, there are more disasters and the impact of disasters is increasing,” said Paul Biddinger of Massachusetts General Hospital. Working to minimize their effect, “we’ve learned what works and doesn’t work, and what does work is practice, practice, practice.”

Lawrence Candell of MIT Lincoln Labs showed a visual surveillance system under development that integrates 48 cell-phone-like video cameras to provide powerful 360-degree images and can automatically follow items such as moving cars. As such systems become commercialized, they could find many uses beyond surveillance, for instance at sports arenas such as the Boston Garden. “You could film and watch your own Boston Celtics game,” with the ability to zero in on the actions and players that interest you most, Candell remarked.

Elliott Rouse of the MIT Media Lab described the development of a bionic ankle for Adrianne Haslet-Davis, a dancer who lost part of her lower leg in last year’s Boston Marathon attack, and showed a video of her dancing again. “We can put people back in places they thought they’d never have again,” Rouse said. “It’s only a matter of time until bionic limbs are better than the ones we have.”

Harvard’s Tadashi Tokieda demonstrated a “chain fountain”—pull a thin chain out of a plastic cup and let go of the chain and it will flow up before turning back down again—and explained a likely mechanism with a stick. “I like to explore surprises that are amusing and interesting to non-scientists and scientists,” he added. Asked where he finds such surprises, Tokieda said they are everywhere around: “There’s an enormous amount of universe.”

Many Boston-area plants now blossom 10 days or more earlier than they did in the 1850s, according to records kept by Henry David Thoreau and others, said Boston University’s Richard Primack. Bees and butterflies also often emerge much earlier in the spring, but migrating birds often arrive only a few days earlier than they did back then. These changes in schedule raise worries that “birds could miss this great pulse of insects in the spring,” he pointed out.

Amanda Randles of Lawrence Livermore Labs presented work that models the fluid dynamics of blood plasma with the movement of red blood cells to help study cardiovascular disease for individual patients using their MRI and CT scans. Such an analysis currently takes hours on one of the world’s largest supercomputers, but she hopes that within a few years, “it becomes something physicians can do on a real-time basis in the office.”

“I don’t know why we long so for permanence, given the fleeting nature of things,” remarked MIT’s Alan Lightman. “Our consciousness makes us feel we are immortal beings,” he added. “Yet Nature is screaming at us at the top of her lungs that everything is passing fast.”

And MIT’s Tanja Bosak skimmed through the mysterious multi-billion-year timeline in which Earth’s oxygen levels rose from almost nothing, noting that jellyfish-like fossils gave one indication of their rise as of 560 million years ago. “If you ask me why we have 20% oxygen in today’s atmosphere, I have no idea,” she acknowledged.

All the speakers seemed to enjoy their five minutes of public science fame.