AI Will Make Healthcare More Human Than Ever. Here’s How.

Originally published in Health:Further

With the rise of robotics and AI across virtually every industry, the fear of “will a robot take my job?” is more pressing than ever. In the healthcare world, at least, that future couldn’t come soon enough.

The U.S. healthcare system is advanced in so many ways, yet one of the most glaring problems that still plagues it is a lack of interoperability, or as we like to say, the lack of the “Internet of Healthcare (IoH).” In the literal sense, the Internet of Healthcare means connecting networks—connecting health systems, connecting data, connecting patient information and more. It means turning healthcare from a series of intranets connected by fax machines, to a true internet connected by AI as the “router.”

That’s a far cry from the healthcare experience we face now. Today, just getting into a hospital requires mountains of paperwork, faxes, and family medical histories that often take longer to fill out than the hospital visit itself. In one of the most vulnerable and human professions that exists, patients are left feeling like just a number.

This problem exists because our healthcare technologies were not built to share data. They were built as fortresses to protect the data of patients at each institution, and to make sure that data was available only within the walls of that system.

As a result, humans had to take on the job of the router, the data processor, the transmitter. This phenomenon has shifted the hours humans spend from being in front of patients to being in front of computer screens, logged in to many user interfaces, shepherding patient data into the right fields. Licensed caregivers’ quality of life has been pummeled by this new role, and the consequences come in the form of burnt-out employees, skyrocketing administrative costs, fewer human-to-human experiences, and most importantly, decreased quality of care.

It’s easy to throw stones at the software that exists and excoriate it for its lack of data-sharing capabilities. However, that software was just a product of the requirements it had to meet to become certified under a rather daunting set of standards imposed by the federal government. It’s not clear that data sharing should have been introduced into the requirements framework earlier or more aggressively, and it’s not clear that diagnosing that now does us any good. The reality of healthcare technology is that we now have to figure out how to scale it to the next level.

We think AI is the solution to scaling that technology, to taking the robot out of the human and propelling human potential further than we’ve ever seen it.

So, what does the world look like when we “take the robot out of the human?” I won’t comment on what it will look like in other industries, but here’s how I see it playing out in the healthcare industry.

1. Insured patients no longer incur unexpected out-of-pocket costs because of registration issues or human error. Instead of filling out insurance information at intake, AI helps hospitals understand patients’ coverage before they even set foot through the door. The same people who spend their days inputting information into EMRs can focus on actually talking to, and understanding, the patients who are there to see them.

2. Patients’ identities are reconciled across multiple departments, even multiple hospitals. By knowing exactly who is coming through the door, and why, AI helps hospitals cut down on doctor-shopping and drastically reduce overdoses on prescription medications.

3. Ride-sharing vehicles are dispatched to the patients who need them the most. Instead of relying on patients to find their own way to the hospital, AI detects which patients have the greatest no-show risk, then dispatches a vehicle to get them the care they need, when they need it.

4. Patients are seamlessly matched to cutting-edge technologies and clinical trials. Finding clinical trial participants can be like finding a needle in a haystack, and it can be the difference between life and death for tens of thousands of people every year. AI gives us the framework not just to enrich those lives, but to save them altogether.

5. Clinicians no longer spend six hours a day entering data into an EMR. Instead, AI transcribes notes from each patient exam and submits them for approval. Burnout decreases, energy improves, and clinicians get to spend their time doing what they care about most.

What’s common about all of those experiences? Humans aren’t out of the picture. In fact, they’re more a part of the picture than they are today. With AI as the router, humans finally have the time, the energy, and the bandwidth to focus on what matters most: the patient.

The current zeitgeist around AI is trepidation about whether or not it will take human jobs, but I believe we will be able to achieve so much more as humankind with the assistance of AI. It’s true that AI will take over parts of our jobs and reconfigure them, but that’s exactly what we need in healthcare today.

We can use AI to take over the Button Olympics that humans are enduring in hospitals across the country. AI can transmit the data where it needs to go, and use global awareness to ensure the right data goes to the right place. AI can turn the human-powered Internet of Healthcare into a technology-powered internet, without having to overhaul the immense infrastructure that has already been put into place. With AI doing all of these things, humans can focus more on creativity and empathy, on the skills that no machine can recreate.

AI is largely not trying to replace humans, just some of what humans do. Imagine what healthcare would be like if we could take the robot out of the human. Think about how much better off, and happier, and more fulfilled, the workforce would be. That’s the world I am dedicated to building.

Will Work for Kilobots

We are moving into a new world very quickly. A world where many of our coworkers will be digital. AI bots, or SCILBOTS as I call them, will become a meaningful slice of our current workforce and will provide the efficiency we, as humans, need to scale. Here’s the thing…how will we pay these bots?

We think we have an answer. Just as the electricity your home consumes is metered in kilowatt hours, the work our bots do is metered in kilobot hours. What is a kilobot? A kilobot is 1,000 weighted actions of an AI bot. Actions are the smallest discrete measurement of an AI bot’s work. Imagine a “click” or “entering text” as a discrete action. More complex actions, like using computer vision to recognize handwriting, are also discrete actions, but since they’re a heavier lift, they consume kilobot hours at a higher velocity.

It works just like your home: when you plug in a night light, the kilowatt-hour meter spins; when you plug in a refrigerator, it spins faster. When an AI bot is simply automating a task with RPA on a user interface, the kilobot-hour meter spins slowly. When an AI bot uses a neural net to make a complex decision, the meter spins faster.
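
To make the metering idea concrete, here’s a minimal sketch of how a kilobot-hour meter could work in code. The action types and weights below are made-up numbers for illustration only, not our actual pricing values:

    # Hypothetical sketch of a kilobot-hour meter. The action weights are
    # illustrative assumptions, not actual CrossChx pricing values.
    ACTION_WEIGHTS = {
        "click": 1,                  # a single UI click
        "enter_text": 1,             # typing into a field
        "ocr_handwriting": 50,       # vision on handwriting: heavier lift
        "neural_net_decision": 200,  # complex model inference
    }

    class KilobotMeter:
        """One kilobot = 1,000 weighted actions of an AI bot."""

        def __init__(self):
            self.weighted_actions = 0

        def record(self, action, count=1):
            # Heavier actions make the meter "spin faster."
            self.weighted_actions += ACTION_WEIGHTS[action] * count

        @property
        def kilobot_hours(self):
            return self.weighted_actions / 1000.0

    meter = KilobotMeter()
    meter.record("click", 500)              # plain RPA: spins slowly
    meter.record("ocr_handwriting", 20)     # computer vision: spins faster
    meter.record("neural_net_decision", 5)  # complex decisions: fastest
    print(f"{meter.kilobot_hours:.2f} kilobot hours")  # 2.50

The point of the weighting is that a thousand RPA clicks and a handful of neural net decisions land on the same single number, which is exactly what makes one spend metric possible.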

We created this methodology for pricing AI bots because, frankly, nothing better existed. The current bot marketplace is full of unscalable economic models like charging per bot, per software seat, per license. We don’t think those methods are sufficient for the wave of AI we are about to encounter. We think organizations will treat their AI bot workforce like infrastructure. Like a utility. Similar to electricity. We wanted to create a way for companies to implement AI across every element of their business and have one number to calculate spend and one metric to understand ROI. Companies will soon look at their monthly or yearly kilobot hour consumption and compare that to the ROI they see from their AI bots. ROI will be measured in things like: increased efficiency, decreased errors, better customer experience, increased quality, and getting more out of their human workforce by scaling them from rote tasks to more sophisticated cognitive tasks.

If your organization doesn’t have an AI bot strategy, we can help. If you haven’t thought about how AI bots will augment your humans, we can help. The time is now to start integrating AI bots into your strategy and to start budgeting for kilobot hour consumption. If you wait more than 12 months, you’ll be behind the curve and you’ll probably be catching up with your competitors who have already started putting AI bots to work.

Our AI bot, Olive, has been hired by dozens of companies already. By next year, it will be hundreds. By the end of 2019, thousands. By 2020, there will be over a million AI bots working side-by-side with human workers.

We invented the kilobot hour, but we don’t expect to be the only company that adopts it. We welcome other companies that are building AI bots to consider the kilobot hour as their pricing model. We’ll all compete on rates and create a true competitive marketplace. This is a new market and it needs leadership now to create enduring economic models. We’re happy to be that leadership at CrossChx and thrilled to be part of inventing the future.

We imagine a future where scaling humans is commonly understood by business leaders and the AI bot workforce helps bring a new level of efficiency and modernity to clunky enterprises where humans are spending too much time doing things meant for machines. The first AC electric meter was invented just a year after Tesla created AC power. We’re following the same track and hope the world imagines the same future we do.

In AI, Are We in 1977, 1980, 1998, or Beyond?

My mind races when I think of all the similarities between the past rise of computing and the current rise of artificial intelligence. It is amazing to watch what seems to be the same story playing out all over again with different characters. Everything is obviously not the same, but there are some key similarities in the core tenets. The biggest difference, I think, is that most people are aware it’s happening, because they’ve seen how fast technology moved before with computing. At least I hope everyone is aware.

Enter stage left, IBM. Yep. Just like in the good ol’ 1960s, when they had a lock on computing through the mainframe market. These giant, room-sized machines produced magical outputs that would one day turn our future into a dystopian sci-fi novel. Right? Well, it was hard to know, because very few people actually saw them, or used them, or understood them. But they sounded very impressive, and of course the computing technology certainly was. It was the mainframe that kicked off our destiny with computers. It was a glimpse into our future relationship with intelligent machines, but it wasn’t the mainframe that changed the world.

IBM has a new mainframe. They call it Watson, and it does AI. Have I ever seen one? Nope. Just on TV when it played Jeopardy. Is it big…probably. Expensive…you bet. Are lots of people allowed to program on it…nope. But wait…in all fairness, they do have the Bluemix application developer platform that exposes Watson skills. Sort of like a modern-day IBM 5150. The 1981 5150 was IBM’s attempt to enter the PC market after Apple had sold 6M Apple IIs since 1977. The 5150 was IBM’s best attempt at “shrinking” its mainframe capabilities and putting them in the hands of real people, after a couple of vaporware flops (not FLOPS) like the SCAMP. They sold about 100,000 units. Not bad, but not Apple. One thing it did succeed in doing was getting Microsoft its first big piece of market share. So…are we in 1982 with AI? You have Watson, which is getting beat up like crazy right now for allegedly being all sizzle and no steak. You have a growing number of companies diving into the AI space, similar in volume to how many companies were jumping into the PC market in the early eighties. Maybe 1982, but let’s unpack this some more.

Let’s talk about what made the PC so powerful and seminal in computing history. I think it was three things: they became accessible, relatable, and programmable. Wow. I just blacked out for a minute right there. That was genius.

Okay, so the PC became accessible. That means normal humans could get their hands on it. They could put it in their house without having to sell a kidney. Cool. It can be argued that the 1981 Sinclair ZX81 fit that mold: priced at $99, it sold 600,000 units. You could also argue, of course, that the 1977 Apple II was accessible. They sold 6M of them at a price of around $2,000. I’ll settle on accessible in the early 1980s.

Now let’s talk about what relatable means. Relatable means you can use the PC for things you do on a daily basis. Things at work, home, or school. Things like writing papers, doing spreadsheets, or playing a sick video game. PCs aren’t relatable when they’re only used for arcane tasks. Likewise, AI isn’t relatable when it’s used for arcane tasks.

Finally, they became programmable. Not just customizable. Programmable. You could start to make them better and more powerful by creating tools for them. When humans could start making tools for computers to use (software) at scale, it changed everything. The more people who had access to computers, the more programmers were made, and the more users there were for the tools those programmers made. It was, and still is, a powerful and virtuous cycle.

Let’s use those three elements to assess where AI is today. Maybe that will help us figure out which year we’re in as it compares to computing history.

We’ll start with accessibility. How many people have access to AI? Well…a whole bunch. I mean, it’s kind of everywhere. But more specifically, I think we can confidently call Siri and Alexa AI, right? I mean, at least they satisfy some Turing qualities, and deep in the bowels of their code are some neat neural nets for learning and some other cool machine intelligence stuff. Amazon has sold about 8M Echos. They’re not the cheapest thing in the world, but they’re not crazy expensive. I have one…but not two. So I’m going to say we’re at 1977 in terms of accessibility. It’s super important to note that besides these voice assistants, AI isn’t that accessible. It has a long way to go. Maybe the iPhone X will have an impact with its on-board GPU.

Next, let’s look at how relatable AI is. As above, the Alexas and Siris of the world are super relatable. However, most of the AI companies out there are focused on things that aren’t so relatable. AI certainly hasn’t invaded our lives and impacted the routine things we do every day. Given the arcane nature of most AI solutions out there, with the exception of Alexa and Siri, I’m going to say we’re in 1977 with relatability too.

Finally, let’s think about programmability. There are some tools out there. TensorFlow, Tesseract, OpenCV, etc. are pretty available. It’s actually pretty straightforward to build a neural net (there’s a quick sketch below). But man, those GPUs are freaking expensive. How are we supposed to have a million programmers building unsupervised learning when they can’t access the compute power? That’s a problem. That’s like a 1972 problem. Hurry, NVIDIA. We need a GPU in every computer, stat. I also think there need to be some new IDEs. We are in the early days of AI software engineering from a tools perspective. I think this will happen quickly, but from that perspective we are still in the mid-’80s. Final ruling: libraries = 1998, hardware = 1977, IDEs = 1983. Average that and we can put programmability at 1986.
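
For instance, here’s a minimal sketch of that “pretty straightforward” claim using Keras, the high-level API that ships with TensorFlow. The data is random noise, invented purely to show how few lines a working net takes:

    import numpy as np
    import tensorflow as tf

    # Toy data: random noise with an arbitrary binary target, just to
    # demonstrate how little code a trainable network requires.
    x = np.random.rand(1000, 20)
    y = (x.sum(axis=1) > 10).astype(int)

    # A small feed-forward network in a handful of lines.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, y, epochs=5, batch_size=32, verbose=0)
    print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]

The libraries are in good shape; it’s the hardware and the surrounding tooling that lag.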

Now we average all three categories together and see where we are: Accessible = 1977, Relatable = 1977, Programmable = 1986. Average that out…and we are in 1980.

Welcome to 1980. The Apple II just rocked the world of personal computing, and Apple is about to hibernate until 1998, when it comes back from the dead with the iMac. IBM is about to retaliate with the IBM 5150 to little fanfare. Underdogs Commodore and NEC are about to outsell IBM’s launch for the first half of the decade, moving over 30 million machines. And this year, Tim Berners-Lee is about to build his first hypertext system. Get ready.

Wonder what this year in AI will look like and how many similarities there will be?

Hacking Time: The Journey to AI

Once I figured out time was the most valuable resource on Earth, it changed the way I thought about everything.

At NSA, I was part of a program whose general concept was to get intelligence to warfighters in real time, orders of magnitude faster than ever before. It was one of the most effective programs the intelligence community has ever seen, in my view, and it was fundamentally about reducing time and creating speed to insight. It was about giving time back to warfighters. It was about giving them the tools and information to make decisions faster.

My first company built tactical cellular networks for the military. It was a software company with some hardware design and a great deal of systems engineering and solution architecting. You can describe the product we offered, and the resulting soul of the company and culture, in many different ways. But ultimately, we arbitraged time. We built systems that provided faster intelligence. Faster answers. Faster information. If we could beat time by an order of magnitude, then we were valuable.

When we started CrossChx, we wanted to get comprehensive health information about patients to doctors faster. We started by identifying patients faster. We gave time back to the registrars. We gave time back to the patients. We did this with a product called SafeChx.

We then focused on giving time back to patients and registrars by figuring out a smoother way for patients to sign in when they show up to the hospital. We called that product Queue.

Next we wanted to crush the time waste patients and providers experienced when they showed up to an appointment and had to fill out those dreaded paper forms. We thought patients shouldn’t have to keep filling out the same paper forms over and over and over again. We wanted their experience of checking in to a medical appointment to feel like checking in for a flight. So we created an app called CrossChx Connect (still available on the app store) that let patients fill out their medical history and insurance information for themselves and their families one last time, and then share that information with any doctor or hospital they wanted. We wanted to give patients and healthcare providers their time back.

Now, I think we’ve hit the holy grail of time hacking. Over and over again in hospitals, we saw repetitive, mundane tasks being done at extremely high volumes by humans who should be spending their time doing other things, like talking to other humans (patients). We saw them doing things that an intelligent router, or better interoperability, or at least better software should have made obsolete years ago. As we peeled back the layers of the onion, we realized that these routine tasks were pervasive. In hospitals, 40% of costs are attributed to employees who perform administrative tasks. And even with all that investment, these tasks are being done less than perfectly. Mistakes happen, there’s not enough time to do them all, things fall through the cracks, backlogs haunt every department…the list goes on and on. Most of the 5,000 hospitals in the country are struggling to exist. They are fighting razor-thin margins, and clerical errors, or simply not getting to every routine task, make survival even harder. We realized that we needed to fix this problem. We wanted to give humans their time back.

To solve this problem, we created Olive. Olive is an employee. She logs into all the same software a human uses, the same way a human does. She performs these high-volume, repetitive, mundane tasks just like a human does. However, she does it with ease. She never lets anything slip through the cracks. She never makes errors. She never gets sick or takes vacation. Olive is an artificial intelligence bot. She’s a SCILBOT, as I wrote about earlier. She breezes through hundreds…thousands of tasks with ease. She emails her boss at the end of the week summarizing all the work she accomplished, and provides insights on things her boss should be paying attention to or suggests how to do things better.
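
To give a flavor of what screen-level automation like this looks like, here’s a toy sketch using the open-source pyautogui library to drive a user interface the way a human would. The coordinates, credentials, and workflow are entirely made up, and this is not how Olive is actually built:

    import pyautogui  # open-source library for driving mouse and keyboard

    # Toy sketch only: coordinates and workflow are invented, and this
    # is not Olive's actual implementation.

    def log_in(username, password):
        """Fill in a login form the way a human would."""
        pyautogui.click(640, 400)   # click the username field
        pyautogui.typewrite(username)
        pyautogui.click(640, 450)   # click the password field
        pyautogui.typewrite(password)
        pyautogui.press("enter")

    def enter_record(record):
        """Shepherd one record's values into the right on-screen fields."""
        for (x, y), value in record.items():
            pyautogui.click(x, y)
            pyautogui.typewrite(value)
        pyautogui.click(640, 700)   # click the submit button

    log_in("olive.bot", "********")
    enter_record({(500, 300): "Jane Doe", (500, 350): "1985-04-12"})

The hard part isn’t the clicking; it’s doing it reliably, at volume, across dozens of systems, without letting anything slip through the cracks.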

We launched Olive in April 2017. She’s being adopted at an incredible rate, with over 40 organizations hiring her as of the time of this post. She’s truly a powerful tool and a much-needed technology for healthcare. Think of all the time she is going to give back to humans. Olive will give more time back to humans than anything I’ve ever built before. Get ready, humans. You’re going to have a lot more time to do a lot bigger things.

Computer Vision — Ingesting Data Like a Human

Imagine you have a robot with a great set of mechanical eyes. These eyes can see and interpret things in the physical world. One of the things this robot can see and interpret is software user interfaces. By looking at a screen, the robot can tell where the submit button is, where to hit like, and where to swipe left or right.
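
As one concrete way those mechanical eyes could work, here’s a small sketch using OpenCV template matching to find a submit button in a screenshot. The file names and the confidence threshold are placeholder assumptions:

    import cv2  # OpenCV, a widely used computer vision library

    # Placeholder file names: a full-screen capture and a cropped image
    # of the button we want the robot to find.
    screen = cv2.imread("screenshot.png")
    button = cv2.imread("submit_button.png")

    # Slide the button image over the screenshot and score each position.
    result = cv2.matchTemplate(screen, button, cv2.TM_CCOEFF_NORMED)
    _, confidence, _, top_left = cv2.minMaxLoc(result)

    if confidence > 0.8:  # illustrative threshold
        h, w = button.shape[:2]
        center = (top_left[0] + w // 2, top_left[1] + h // 2)
        print(f"Submit button at {center} (confidence {confidence:.2f})")
    else:
        print("Submit button not found on this screen")

Template matching is the crudest tool in the box, and a production system would lean on more robust detection, but the principle of interpreting pixels instead of APIs is the same.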

Now imagine this robot’s eyes can just as easily train on a 72-inch screen on a wall in your living room. And these eyes are so good, the robot can see every pixel. Now imagine that robot wants to know Wikipedia. I mean, it wants to know ALL of Wikipedia. The robot could put Wikipedia up on that 72-inch screen and read (very quickly) all the content, methodically clicking on every single link and turning over every single rock. This could work. The robot would learn all the information, but it may not be the most performant and efficient way to transfer this data. Let’s say it goes ahead and reads Wikipedia this way anyway. How would it store the data? In what structure? Now imagine it can store the data by performing entity extraction on ingestion, “as it reads it,” and then forming a massive entity graph of people, places, things, and concepts.

Okay, so I hope that created an interesting picture in your mind. I hope you imagined a robot standing in front of a giant TV, reading all of Wikipedia as fast as superhumanly possible and storing all the information in a beautiful graph, much like the human brain does.
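
Here’s a rough sketch of what that read-and-extract loop could look like, using spaCy for entity extraction and networkx for the graph. Treating co-occurrence in a sentence as a relationship is a crude stand-in for a real relationship model:

    import itertools
    import networkx as nx
    import spacy  # model install: python -m spacy download en_core_web_sm

    nlp = spacy.load("en_core_web_sm")
    graph = nx.Graph()

    def ingest(text):
        """Extract entities 'as we read' and link ones that co-occur."""
        doc = nlp(text)
        for sent in doc.sents:
            entities = [ent.text for ent in sent.ents]
            graph.add_nodes_from(entities)
            # Crude relationship model: same sentence -> an edge.
            graph.add_edges_from(itertools.combinations(entities, 2))

    ingest("Nikola Tesla worked for Thomas Edison in New York before 1886.")
    print(graph.nodes())  # people, places, and dates pulled out on ingestion
    print(graph.edges())  # the beginnings of a massive entity graph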

With that picture still in your mind, let’s keep the cool graph brain but redo the way the robot ingests the data. Imagine instead we took all the Wikipedia data and turned it into a bitmap across the 72-inch TV. Now imagine that bitmap changed every 0.1 seconds. Imagine how much faster the robot could ingest all that data. The robot, after staring at the screen for a few minutes, can ingest all of Wikipedia. If the robot just hooked up to wifi, couldn’t we zap all that data into the robot’s brain? Sure. That could work, as long as the robot could find and accept the data feed. But think about how humans ingest information: through our senses. If we want to make Turing-like AI that closely resembles humans and can think with the same sophistication, shouldn’t we try to mimic the way a human ingests information? Maybe the data rates of a feed over wifi are faster. But seeing data through the eyes and hearing data through the ears is certainly more ubiquitous. And isn’t it really just a matter of time until we figure out how to pass information faster through computer vision than we can today through wireless data?
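
Some back-of-the-envelope arithmetic shows why the bitmap idea is at least plausible. The resolution, frame rate, and Wikipedia size below are all assumptions for illustration:

    # Back-of-the-envelope throughput for the bitmap idea. Resolution,
    # frame rate, and corpus size are assumed numbers for illustration.
    width, height = 3840, 2160   # assume the 72-inch screen is 4K
    bytes_per_pixel = 3          # 24-bit color
    frames_per_second = 10       # one new bitmap every 0.1 seconds

    bytes_per_frame = width * height * bytes_per_pixel
    throughput = bytes_per_frame * frames_per_second   # bytes per second

    wikipedia_text = 20e9        # rough size of English Wikipedia's text

    print(f"{bytes_per_frame / 1e6:.1f} MB per frame")        # ~24.9 MB
    print(f"{throughput / 1e6:.0f} MB/s raw visual channel")  # ~249 MB/s
    print(f"~{wikipedia_text / throughput / 60:.1f} minutes") # ~1.3 min

At those assumed numbers, the raw visual channel really does swallow a Wikipedia-sized corpus in minutes, which is the intuition behind the thought experiment.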

What if two robots wanted to communicate with each other? Most people today would conclude that they would link up over some wireless protocol with some authentication handshake and pass data back and forth. Imagine if they could communicate more ubiquitously through the visual spectrum.