Will Work for Kilobots

We are moving into a new world very quickly: a world where many of our coworkers will be digital. AI bots, or SCILBOTS as I call them, will become a meaningful slice of the workforce and will provide the efficiency we, as humans, need to scale. Here’s the thing…how will we pay these bots?

We think we have an answer. Just as the electricity in your home is metered in kilowatt hours, our bots are metered in kilobot hours. What is a kilobot? A kilobot is 1,000 weighted actions of an AI bot. Actions are the most discrete measurements of an AI bot’s work: imagine a “click” or “entering text” as a discrete action. A more complex action, like using computer vision to recognize handwriting, is also a discrete action, but since it’s a heavier lift it consumes kilobot hours at a higher rate.

It works like your home: plug in a night light and the kilowatt-hour meter spins slowly; plug in a refrigerator and it spins faster. When an AI bot is simply automating a task with RPA on a user interface, the kilobot-hour meter spins slowly. When an AI bot uses a neural net to make a complex decision, the meter spins faster.
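To make the metering concrete, here is a minimal sketch of how a weighted-action meter could work. The action names and weights are hypothetical illustrations, not our actual rate card.

```python
# Hypothetical action weights: illustrative only, not a real rate card.
ACTION_WEIGHTS = {
    "click": 1,                  # simple UI action: the meter spins slowly
    "enter_text": 1,
    "ocr_handwriting": 50,       # computer vision: a heavier lift
    "neural_net_decision": 100,  # complex decision: the meter spins fastest
}

def kilobot_hours(action_log):
    """Convert a bot's action log into kilobot hours.

    One kilobot hour = 1,000 weighted actions.
    """
    weighted = sum(ACTION_WEIGHTS.get(action, 1) for action in action_log)
    return weighted / 1000.0

# Example: 600 clicks plus 8 handwritten fields read by computer vision
# come to 600 + 400 = 1,000 weighted actions, i.e. exactly 1 kilobot hour.
log = ["click"] * 600 + ["ocr_handwriting"] * 8
print(kilobot_hours(log))  # 1.0
```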

We created this methodology for pricing AI bots because, frankly, nothing better existed. The current bot marketplace is full of unscalable economic models: charging per bot, per software seat, or per license. We don’t think those methods are sufficient for the wave of AI we are about to encounter. We think organizations will treat their AI bot workforce like infrastructure. Like a utility. Similar to electricity. We wanted to create a way for companies to implement AI across every element of their business and have one number to calculate spend and one metric to understand ROI. Companies will soon look at their monthly or yearly kilobot hour consumption and compare it to the ROI they see from their AI bots. ROI will be measured in things like increased efficiency, decreased errors, better customer experience, increased quality, and getting more out of the human workforce by moving people from rote tasks to more sophisticated cognitive tasks.

If your organization doesn’t have an AI bot strategy, we can help. If you haven’t thought about how AI bots will augment your humans, we can help. The time is now to start integrating AI bots into your strategy and to start budgeting for kilobot hour consumption. If you wait more than 12 months, you’ll be behind the curve, playing catch-up with competitors who have already put AI bots to work.

Our AI bot, Olive, has been hired by dozens of companies already. By next year, it will be hundreds. By the end of 2019, thousands. By 2020, there will be over a million AI bots working side-by-side with human workers.

We invented the kilobot hour, but we don’t expect to be the only company that adopts it. We welcome other companies that are building AI bots to consider the kilobot hour as their pricing model. We’ll all compete on rates and create a true competitive marketplace. This is a new market and it needs leadership now to create enduring economic models. We’re happy to be that leadership at CrossChx and thrilled to be part of inventing the future.

We imagine a future where scaling humans is commonly understood by business leaders and the AI bot workforce helps bring a new level of efficiency and modernity to clunky enterprises where humans are spending too much time doing things meant for machines. The first AC electric meter was invented just a year after Tesla created AC power. We’re following the same track and hope the world imagines the same future we do.

In AI, Are We in 1977, 1980, 1998, or Beyond?

My mind races when I think of all the similarities between the past rise of computing and the current rise of artificial intelligence. It is amazing to watch what seems to be the same story playing out all over again with different characters. Obviously not everything is the same, but there are some striking similarities in the key tenets. The biggest difference, I think, is that most people are aware it’s happening, because they’ve seen how fast technology moved before with computing. At least I hope everyone is aware.

Enter stage left, IBM. Yep. Just like in the good ol’ 1960s, when they had a lock on computing through the mainframe market. These giant, room-sized machines produced magical outputs that would one day turn our future into a dystopian sci-fi novel. Right? Well, it was hard to know, because very few people actually saw them, or used them, or understood them. But they sounded very impressive, and of course the computing technology certainly was. It was the mainframe that kicked off our destiny with computers. It was a glimpse into our future relationship with intelligent machines, but it wasn’t the mainframe that changed the world.

IBM has a new mainframe. They call it Watson and it does AI. Have I ever seen one? Nope. Just on TV when it played Jeopardy. Is it big…probably. Expensive…you bet. Are lots of people allowed to program on it…nope. But wait…in all fairness, they do have the Bluemix application developer capability that exposes Watson skills. Sort of like a modern-day IBM 5150. The 1981 5150 was IBM’s attempt to enter the PC market after Apple had sold 6M Apple IIs since 1977. The 5150 was the best example of IBM “shrinking” its mainframe capabilities and putting them in the hands of real people since a couple of vaporware flops (not FLOPS) with the SCAMP. They sold about 100,000 units. Not bad, but not Apple. One thing it did succeed in doing was getting Microsoft its first big piece of market share. So…are we in 1982 with AI? You have Watson, which is getting beat up like crazy right now for allegedly being all sizzle and no steak. You have a growing number of companies diving into the AI space, similar in volume to the number of computer makers jumping into the PC market in the early eighties. Maybe 1982, but let’s unpack this some more.

Let’s talk about what made the PC so powerful and seminal in computing history. I think it was three things: it became accessible, relatable, and programmable. Wow. I just blacked out for a minute right there. That was genius.

Okay, so the PC became accessible. That means normal humans could get their hands on it. They could put it in their house without having to sell a kidney. Cool. It can be argued that the 1981 Sinclair ZX81 fit that mold. It was priced at $99 and sold 600,000 units. You could also argue, of course, that the 1977 Apple II was accessible. They sold 6M of them at a price of around $2,000. I’ll settle on accessible in the early 1980s.

Now let’s talk about what relatable means. Relatable means you can use the PC for things you do on a daily basis: things at work, home, or school. Things like writing papers, doing spreadsheets, or playing a sick video game. PCs aren’t relatable when they’re only used for arcane tasks. Likewise, AI isn’t relatable when it’s used for arcane tasks.

Finally, they became programmable. Not just customizable. Programmable. You could start to make them better and more powerful by creating tools for them. When humans could start making tools for computers to use (software) at scale, it changed everything. The more people with access to computers, the more programmers were minted, and the more users there were for the tools those programmers made. It was, and still is, a powerful and virtuous cycle.

Let’s use those three elements to assess where AI is today. Maybe that will help us figure out which year we’re in as it compares to computing history.

We’ll start with accessibility. How many people have access to AI? Well…a whole bunch. I mean, it’s kind of everywhere. But more specifically, I think we can confidently call Siri and Alexa AI, right? I mean, at least they satisfy some Turing qualities, and deep in the bowels of their code are some neat neural nets for learning and some other cool machine intelligence stuff. Amazon has sold about 8M Echos. They’re not the cheapest thing in the world, but they’re not crazy expensive. I have one…but not two. So I’m going to say we’re at 1977 in terms of accessibility. It’s super important to note that besides these voice assistants, AI isn’t that accessible. It has a long way to go. Maybe the iPhone X will have an impact with its on-board GPU.

Next, let’s look at how relatable AI is. Like above, the Alexas and Siris of the world are super relatable. However, most of the AI companies out there are focused on things that aren’t so relatable. AI certainly hasn’t invaded our lives and impacted the routine things we do every day. Given the arcane nature of most AI solutions out there, with the exception of Alexa and Siri, I’m going to say we’re in 1977 with relatability too.

Finally, let’s think about programmability. There are some tools out there. TensorFlow, Tesseract, OpenCV, etc. are pretty available. It’s actually pretty straightforward to build a neural net. But man, those GPUs are freaking expensive. How are we supposed to have a million programmers building unsupervised learning when they can’t access the compute power? That’s a problem. That’s like a 1972 problem. Hurry, NVIDIA. We need GPUs in every computer, stat. I also think there need to be some new IDEs. We are in the early days of AI software engineering from a tools perspective. I think this will happen quickly, but we are still in the mid-80s there. Final ruling: Libraries = 1998, Hardware = 1977, IDEs = 1983. Average those and we can put programmability in 1986.
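To back up the claim that building a neural net is straightforward today, here is a minimal sketch using TensorFlow’s Keras API. The dataset and architecture are just an illustration: a tiny network learning XOR.

```python
import numpy as np
import tensorflow as tf

# XOR: the classic "hello world" of neural nets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Two dense layers are enough to learn a non-linear function like XOR.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=1000, verbose=0)

print(model.predict(X).round().flatten())  # expect [0. 1. 1. 0.]
```

A dozen lines of library code; the expensive part is the compute when you scale this up, which is exactly the hardware problem above.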

Now we average all three categories together and see where we are: Accessible = 1977, Relatable = 1977, Programmable = 1986. Average that out…and we are in 1980.

Welcome to 1980. The Apple II just rocked the world of personal computing, and Apple is about to hibernate until 1998, when they come back from the dead with the iMac. IBM is about to retaliate with the IBM 5150, to little fanfare. Underdogs Commodore and NEC are about to beat IBM’s launch for the first half of the decade, selling over 30 million machines. And this year, Tim Berners-Lee is about to invent hypertext. Get ready.

Wonder what this year in AI will look like and how many similarities there will be?

Hacking Time: The Journey to AI

Once I figured out time was the most valuable resource on Earth, it changed the way I thought about everything.

At NSA, I was part of a program whose general concept was to get intelligence to warfighters in real time, orders of magnitude faster than ever before. It was one of the most effective programs the intelligence community has ever seen, in my view, and it was fundamentally about reducing time and creating speed to insight. It was about giving time back to warfighters. It was about giving them the tools and information to make decisions faster.

My first company built tactical cellular networks for the military. It was a software company with some hardware design and a great deal of systems engineering and solution architecting. You can describe the product we offered, and the resulting soul of the company and its culture, in many different ways. But ultimately we arbitraged time. We built systems that provided faster intelligence. Faster answers. Faster information. If we could beat time by an order of magnitude, then we were valuable.

When we started CrossChx, we wanted to get comprehensive health information about patients to doctors faster. We started by identifying patients faster. We gave time back to the registrars. We gave time back to the patients. We did this with a product called SafeChx.

We then focused on giving time back to patients and registrars by figuring out a smoother way for patients to sign in when they show up to the hospital. We called that product Queue.

Next we wanted to crush the time waste patients and providers experienced when they showed up to an appointment and had to fill out those dreaded paper forms. We thought patients shouldn’t have to keep filling out the same paper forms over and over and over again. We wanted checking in to a medical appointment to feel like checking in for a flight. So we created an app called CrossChx Connect (still available on the app store) that let patients fill out their medical history and insurance information for themselves and their family one last time, and then share that information with any doctor or hospital they wanted. We wanted to give patients and healthcare providers their time back.

Now, I think we’ve hit the holy grail of time hacking. Over and over again in hospitals, we saw repetitive, mundane tasks being done at extremely high volumes by humans who should be spending their time doing other things, like talking to other humans (patients). We saw them doing things that an intelligent router, better interoperability, or at least better software should have made obsolete years ago. As we peeled back the layers of the onion, we realized that these routine tasks were pervasive. In hospitals, 40% of costs are attributed to employees who perform administrative tasks. And even with all that investment, these tasks are done less than perfectly. Mistakes happen, there’s not enough time to do them all, things fall through the cracks, backlogs haunt every department…the list goes on and on. Most of the 5,000 hospitals in the country are struggling to exist. They are fighting razor-thin margins, and clerical errors, or simply not getting to all the routine tasks, make survival even harder. We realized that we needed to fix this problem. We wanted to give humans their time back.

To solve this problem, we created Olive. Olive is an employee. She logs into all the same software a human uses, the same way a human does. She performs these high-volume, repetitive, mundane tasks just like a human does. However, she does it with ease. She never lets anything slip through the cracks. She never makes errors. She never gets sick or takes vacation. Olive is an artificial intelligence bot. She’s a SCILBOT, as I wrote about earlier. She breezes through hundreds…thousands of tasks with ease. She emails her boss at the end of the week summarizing all the work she accomplished and provides insights on things her boss should be paying attention to, or suggests how to do things better.

We launched Olive in April 2017. She’s being adopted at an incredible rate, with over 40 organizations hiring her as of the time of this post. She’s truly a powerful tool and a much-needed technology for healthcare. Think of all the time she is going to give back to humans. Olive will give more time back to humans than anything I’ve ever built before. Get ready, humans. You’re going to have a lot more time to do a lot bigger things.

Computer Vision — Ingesting Data Like a Human

Imagine you have a robot with a great set of mechanical eyes. These eyes can see and interpret things in the physical world. One of the things this robot can see and interpret is software user interfaces. By looking at a screen, the robot can tell where the submit button is, where to hit like, and where to swipe left or right.

Now imagine this robot’s eyes can just as easily train on a 72-inch screen on a wall in your living room. And these eyes are so good the robot can see every pixel. Now imagine that robot wants to know Wikipedia. I mean, it wants to know ALL of Wikipedia. The robot could put Wikipedia up on that 72-inch screen and read (very quickly) all the content, methodically clicking on every single link and turning over every single rock. This could work. The robot would learn all the information. But it may not be the most performant and efficient way to transfer the data. Let’s say it goes ahead and reads Wikipedia this way anyway. How would it store the data? In what structure? Now imagine it can store the data by performing entity extraction on ingestion, “as it reads it,” and then form a massive entity graph of people, places, things, and concepts.

Okay, so I hope that created an interesting picture in your mind. I hope you imagined a robot standing in front of a giant TV, reading all of Wikipedia as fast as superhumanly possible and storing all the information in a beautiful graph, much like the human brain does.
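As a toy illustration of entity extraction on ingestion, here is a sketch that pulls naive entity candidates out of each sentence and links co-occurring ones in a graph. A real system would use a trained named-entity model; the regex is just a stand-in.

```python
import re
from collections import defaultdict

graph = defaultdict(set)  # adjacency list: entity -> related entities

def ingest(sentence):
    # Naive entity candidates: runs of capitalized words (a stand-in for NER).
    entities = [e.strip() for e in re.findall(r"(?:[A-Z][a-z]+\s?)+", sentence)]
    # Link every pair of entities that co-occur in the same sentence.
    for a in entities:
        for b in entities:
            if a != b:
                graph[a].add(b)

ingest("Nikola Tesla worked for Thomas Edison before championing alternating current.")
print(dict(graph))
# {'Nikola Tesla': {'Thomas Edison'}, 'Thomas Edison': {'Nikola Tesla'}}
```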

With that picture still in your mind, let’s keep the cool graph brain but redo the way the robot ingests the data. Imagine instead we took all the Wikipedia data and turned it into a bitmap across a 72-inch TV. Now imagine that bitmap changed every 0.1 seconds. Imagine how much faster the robot could ingest all that data. The robot, after staring at the screen for a few minutes, could ingest all of Wikipedia. If the robot just hooked up to WiFi, couldn’t we simply zap all that data into the robot’s brain? Sure. That could work, as long as the robot could find and accept the data feed. But think about how humans ingest information: through our senses. If we want to make Turing-like AI that closely resembles humans and can think with the same sophistication, shouldn’t we try to mimic the way a human ingests information? Maybe the data rates of a feed over WiFi are faster. But seeing data through the eyes and hearing data through the ears is certainly more ubiquitous. And isn’t it really just a matter of time until we figure out how to pass information faster through computer vision than we can today through wireless data?
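Here is a back-of-the-envelope sketch of that idea: pack raw bytes into pixel values and read them back, plus the throughput math. The 4K resolution and 0.1-second refresh are my assumptions, not a spec.

```python
import numpy as np

WIDTH, HEIGHT = 3840, 2160  # assumed 4K panel standing in for the 72-inch TV

def encode(text):
    """Pack UTF-8 bytes into one 8-bit grayscale frame, zero-padded."""
    data = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)
    frame = np.zeros(WIDTH * HEIGHT, dtype=np.uint8)
    frame[:data.size] = data
    return frame.reshape(HEIGHT, WIDTH), data.size

def decode(frame, n_bytes):
    """Read the bytes back out of the frame."""
    return frame.reshape(-1)[:n_bytes].tobytes().decode("utf-8")

frame, n = encode("All of Wikipedia, one screenful at a time.")
print(decode(frame, n))

# One grayscale 4K frame holds 3840 * 2160 bytes, about 8.3 MB. A new bitmap
# every 0.1 s is roughly 83 MB/s, before any real-world losses from optics,
# exposure, and error correction.
print(WIDTH * HEIGHT / 0.1 / 1e6, "MB/s")  # ~82.9
```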

What if two robots wanted to communicate with each other? Most people today would conclude that they would link up over some wireless protocol with an authentication handshake and pass data back and forth. Imagine if they could communicate more ubiquitously through the visual spectrum.

Will software makers throw up roadblocks for bots?

Sometimes I hear the question of whether software makers will program obstacles for bots. My answer is that the ones who want to win will do exactly the opposite. In the near future, consumers, both enterprise and commercial, will start to expect, if not demand, accessibility for bots. Enterprises that have invested in automation infrastructure will want to ensure that any new software they purchase is easily learnable and usable by their bot workforce. This means any software that throws up roadblocks for bots will have a major disadvantage. The market will force software to adapt and will start to evaluate ease of use by bots when deciding on software purchases.

Let’s take it a step further. Software will start coming with a bot user interface (and this ain’t gonna be APIs), similar to how software apps have a mobile version or are “responsive.” The new “responsive” will include the ability to easily serve up capabilities and functions for bots.

One step further: there will be a “jQuery” for bots. There will be a library of UI tools that developers go to when designing bot interfaces for their software. This new jQuery will not be focused on aesthetics; it will be focused on performance, on packing in as much functionality as possible, and on accurate data transfer through the UI. This library of front-end tools will become the language of compatibility in the bot universe, and it will ensure that any bot can use advanced computer vision capabilities to see, learn, and use any piece of software.
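What might that library emit? Pure speculation, but here is a sketch of a machine-readable bot-interface manifest an app could serve alongside its human UI. Every name in it is invented for illustration.

```python
import json

# Hypothetical manifest a future "jQuery for bots" might generate: a
# machine-readable map of what the UI can do and where, so a bot only
# falls back to raw computer vision when no manifest exists.
bot_interface = {
    "app": "ExampleEHR",  # invented app name
    "version": "1.0",
    "affordances": [
        {
            "action": "search_patient",
            "locator": "#patient-search",
            "inputs": ["last_name", "date_of_birth"],
        },
        {
            "action": "submit_claim",
            "locator": "#claim-form button[type=submit]",
            "inputs": ["patient_id", "claim_amount"],
        },
    ],
}

print(json.dumps(bot_interface, indent=2))
```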

So who’s gonna create the jQuery for bots?

Looking forward to the future of democratized AI.

This is the next technology revolution

Pay attention. This is the next revolution. The best part is at the end of this article.

This is the thing I would bet it all on (and am). This is the one thing I know will happen. It’s inevitable.

We have spent the last 30-40 years, as humankind, putting our work lives into, and onto, software. We have created millions of software applications, and have spent all of our work days migrating our world to them and living our work lives inside them.

We have been building the tools that will enable the next big revolution.

What does that mean?

Super-Connected: Think super in the context of superstructure. Think super as it pertains to being above the base layer. Think connections beyond a single software application. Think connections beyond a single enterprise. SCILBOTS will understand what other SCILBOTS have done at a complete and specific level. They will understand the continuum of what actions have occurred on any entity (think any person, place, or thing: an account, customer, product, patient, etc.); there’s a sketch of this idea after these definitions.

Intelligent: SCILBOTS will have the capacity to understand, retain, and organize all the data associated with all the work they do with no exceptions. They will understand context and correlations. And since they are super-connected, they will have the capacity to share this knowledge.

Learning: SCILBOTS will learn how to do their jobs better, together. They will start with the basic capabilities of supervised machine learning on the left end of the machine intelligence spectrum, and humans will eventually build the cognitive capabilities for SCILBOTS to advance into the realm of advanced unsupervised machine intelligence. With the power of human innovation that comes along with platform development (like the internet), this will happen faster than any of us can imagine.
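As promised above, here is a minimal sketch of the super-connected idea: a shared, append-only log of actions keyed by entity, so any SCILBOT can see the full continuum of what has happened to an account, patient, or product. All of the names are hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timezone

shared_ledger = defaultdict(list)  # entity_id -> ordered action history

def record_action(bot_id, entity_id, action, detail):
    """Append one bot action to the shared, cross-system ledger."""
    shared_ledger[entity_id].append({
        "bot": bot_id,
        "action": action,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def continuum(entity_id):
    """Everything every bot has done to this entity, in order."""
    return shared_ledger[entity_id]

record_action("olive-01", "patient:42", "verified_insurance", "payer=Acme Health")
record_action("olive-02", "patient:42", "scheduled_follow_up", "2018-03-01")
print(continuum("patient:42"))
```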

So how will it work, specifically? SCILBOTS will log in to your existing software systems and do 90% of what you do today. And yes, they will do it better. And they should; we’re humans, not routers. Our biggest challenge will be figuring out how to scale ourselves up with this new technological reality.

Where will all these bots come from? Developers, geeks, and creators will build hundreds of thousands of them on robust platforms designed specifically to build and deploy these SCILBOTS.

Stay tuned. CrossChx is building SCILBOTS today and has already deployed them as employees in healthcare. We’ve built Olive, our healthcare version of the SCILBOT, on a homegrown platform. We’re going to release this platform to beta users this year to build their own SCILBOTS. Next year, we’re going to release it to the world. Here comes the revolution. Buckle up.