As more software companies turn their focus to healthcare, Product Managers who are new to the industry will notice the intricacies that make building, launching, and maintaining a healthcare product different. Healthcare is one of the most unique industries in terms of regulation, data sensitivity, and social impact, which pushes product teams to take novel approaches to the typical “playbooks” that exist. While healthcare’s strict privacy rules, high costs, and siloed software systems can make it seem like an unattractive industry to work in, that is exactly why more of us need to help. The outcomes of improving anything from health data analysis to back office efficiencies all impact care in some way, and can ultimately improve the health of our society. That is the ultimate payoff. I hope to provide a little insight into some of the challenges you’ll face as a PdM in healthcare, preparing you to build healthcare products for the first time.
Note: My experience is primarily in building products for hospitals in the context of registration, patient flow, and back office operations.
Secure environments and sensitive data
Hospitals take security very seriously. Everything from how the building is organized, to where patient data is stored, to privacy agreements with patients creates a complex security apparatus that makes it difficult to obtain the typical data sets about your users that you’re accustomed to. For example, you may have trouble finding a usage analytics tool/framework that works for you and your customers. In many cases, simply using Google Analytics will not be viable out-of-the-box, especially without a BAA in place with Google. Although it may be unintentional, there is a potential that some of the monitoring may pick up PHI in certain events, and even if that is cleared on your end, hospital security teams may have issues with that data going to Google. In more secure settings, hospitals may even want you to operate in an on-prem/off-net setting, eliminating your ability to use conventional tools to collect usage data in the cloud. This challenge is both a technical and a communication hurdle.
Be proactive about all things security and data. Setting up environments and analysis tools in non-compliant ways will only cause more pain when transitioning later and when having conversations with potential customers. When it comes to collecting usage data, consider using a self-hosted solution (e.g., Piwik) or a home-grown solution if needed, and do it early. If you want to get metrics that are based on the PHI (Protected Health Information) that you’re collecting, this will require more secure analysis infrastructure (for instance, we have a separate HIPAA-compliant AWS cloud with Zeppelin to do analysis), so plan ahead. I also cannot overstate the value of in-depth security documentation describing your architecture and data collection practices. This will save you hours of phone calls with security teams and make you seem more legitimate to potential customers.
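To make the idea concrete, here is a minimal sketch of the kind of event scrubbing you might put in front of any usage-analytics pipeline. The field names and the scrubbing approach are purely illustrative, not a complete PHI taxonomy; what actually counts as PHI should be decided with your security and compliance teams.

```python
# Hypothetical sketch: strip potentially PHI-bearing fields from a usage
# event before it leaves your application. The denylist below is an
# illustrative example, NOT an exhaustive list of PHI identifiers.
PHI_FIELDS = {"patient_name", "dob", "mrn", "ssn", "address", "diagnosis"}

def scrub_event(event: dict) -> dict:
    """Return a copy of the event with PHI-like fields removed."""
    return {k: v for k, v in event.items() if k not in PHI_FIELDS}

raw = {
    "event": "registration_completed",
    "duration_ms": 4200,
    "patient_name": "Jane Doe",  # must never reach the analytics tool
    "mrn": "000123",
}
safe = scrub_event(raw)
# 'safe' keeps only non-sensitive keys: the event name and timing data.
print(safe)  # {'event': 'registration_completed', 'duration_ms': 4200}
```

A denylist like this is a last line of defense; an allowlist of known-safe fields is usually the safer design in a hospital setting.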
Complex and siloed health data
If you’re new to healthcare and plan to build something that leverages patient data, be prepared to spend a significant amount of time learning about the different data protocols (HL7, FHIR, etc.) and terminology sets (SNOMED, LOINC, CPT, DRG, ICD, etc.). These standards exist to classify and label information; however, many of them are not used in standard ways, creating issues for non-experts trying to leverage the data. Connecting to sources of this raw data is often difficult, and each hospital will use slightly different system configurations and prefer a different connection method. For instance, a hospital that wants to leverage HL7 may have to pay upwards of $10,000 to set up an interface for your product that allows HL7 to flow bidirectionally. Also note that these are the challenges just to read this information. When writing back to a patient’s record, you’ll need to be even more careful managing patient data.
Besides establishing analysis environments early, it’s important to define your end goal for your data analysis upfront. This will help you focus on the specific data types you really need to understand and will give you the capacity to dive deep. For instance, if you want to analyze the different diagnoses that patients have and your data is coming via HL7, you’ll need to first learn to parse the HL7 to the correct segment, then to decipher the content in that segment (likely an ICD code, for consistency). Knowing where you want to end up will guide you to work through these different conventions one at a time, rather than overwhelming your team with dozens of other terminology sets at once.
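The diagnosis example above can be sketched in a few lines. This is a deliberately naive illustration of pulling a code out of the DG1 segment of a pipe-delimited HL7 v2 message; the sample message and field positions follow common HL7 v2 conventions, but real messages vary by site, so a dedicated HL7 parsing library is the right tool in production.

```python
# Illustrative sketch, not production HL7 handling: find DG1 segments in a
# raw HL7 v2 message and pull out the diagnosis code (DG1-3), which is a
# coded element of the form code^description^coding-system.
def extract_diagnosis_codes(hl7_message: str) -> list[str]:
    """Return diagnosis codes from DG1 segments of an HL7 v2 message."""
    codes = []
    for segment in hl7_message.split("\r"):  # segments are CR-delimited
        fields = segment.split("|")          # fields are pipe-delimited
        if fields[0] == "DG1" and len(fields) > 3:
            code = fields[3].split("^")[0]   # first component is the code
            if code:
                codes.append(code)
    return codes

# A minimal, hypothetical ADT message carrying one ICD-10 diagnosis.
msg = (
    "MSH|^~\\&|APP|FAC|||20240101||ADT^A01|123|P|2.3\r"
    "PID|1||000123||DOE^JANE\r"
    "DG1|1|I10|E11.9^Type 2 diabetes^I10"
)
print(extract_diagnosis_codes(msg))  # ['E11.9']
```

Even this toy example shows why the “one convention at a time” advice matters: you have to know the segment, the field position, and the coding system before the data means anything.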
Live Feedback
Nurses, doctors, and anyone working in a hospital setting for that matter, are extremely busy (not to mention, there aren’t enough of them). They are also working directly with patients or with patient data throughout the day, so it can be challenging to find a time and place to meet with a user/potential user for feedback. Product teams will have to do a little extra work to get feedback from these specialized users/customers.
As with any product, there is no substitute for getting out in the field, but with healthcare specifically, this may be the only time you get with your target users/customers. Spend as much time as you can at hospitals with customers or prospective customers, asking questions, observing, and meeting with as many departments as you can. Go on sales meetings, installations, or even exploratory conversations if you can get them. Although we all have our own experience in healthcare settings, being there for research, not as a patient, will give you a different perspective. I personally found this to be a little awkward at first, but the insights and empathy for your users that you’ll gain is invaluable. Always be cognizant of the fact that “once you’ve seen one hospital, you’ve seen one hospital;” they all operate a little differently, and you’ll gain new insights from each conversation.
Validating New Product Ideas
It’s hard to validate B2HC (Business to Healthcare) ideas with traditional B2C techniques. Hospitals are hesitant to try “just any” software they find on the internet, and it’s tough to reach the right audience with a simple landing/sign-up page to gauge interest. To make any progress, you’re going to have to get more hands-on.
First, if you have any existing customers, leverage any time with them to validate your new ideas. For everything from sales decks to demos, your existing customers will be a great starting place for validation. When presenting a new concept in a healthcare setting, whether it’s a mockup or a prototype, I recommend making it feel as real as possible. There’s no need to lie about progress or waste too much time on working software, but presenting something that seems incomplete may limit your ability to gain traction in your conversation with a risk-averse healthcare customer. We’ve used many techniques from Ash Maurya’s “Running Lean” over the years, particularly the problem and solution interviews.
Always have the patient in mind. A fair amount of my time has been building products or features that a patient never interacts with. Our customers typically evaluate the ROI of these products by how they reduce costs in some way. In the end this eventually translates to better patient experience and outcomes. Remember that at some point, everyone is a patient. Focus on the end goal of health.
As a Product Manager, you own the decisions made about your product. This doesn’t mean that you always make the choices yourself, but that you are responsible for facilitating decisions and accountable for their outcomes. While the outcome of a decision is the typical measure of success, I believe that PdMs should also focus on how quickly they can take advantage of an opportunity. In the fast-paced landscape of technology companies, especially startups, taking weeks or even days to make a decision could be long enough to render even a “perfect” decision (which probably doesn’t exist) useless. All too often, to a fault, analysis is valued over execution. In this blog, we’ll explore how to leverage action to make better Product decisions.
To illustrate how overanalysis can cause a missed opportunity, consider the following scenario. You want to improve conversion on your website because a new competitor is beginning to take some of your potential users. You think you can make up ground by improving the usability of the sign-up flow, and want to take some time to analyze the current sign-up data, run some user groups and a few A/B tests, and optimize your site from there. While this sounds like a fine plan, PdMs need to be cognizant of the time they’re spending on optimization and planning relative to the magnitude of the decision. Running user groups and inconsequential A/B tests over a matter of months may lead to a solution capable of doubling your conversion rate, but by then your competitor may have already swayed the bulk of your potential users. And how valuable are those acquired users in the end, anyway? If you made a slightly less informed decision that resulted in a slightly smaller conversion improvement, but in half the time, the end result would likely be a net gain in users over your competitor. PdMs need to constantly weigh the importance and impact of a decision against the amount of time and energy required to make it.
“This reveals something counterintuitive about decision making: your goal shouldn’t be to always make the right decision, it should be to invest the right amount of time in making a decision relative to its importance.” – Brandon Chu, Making Good Decisions as a Product Manager
To make more effective product decisions, I recommend trying out the following techniques:
Keep the end goal in mind
Sometimes it’s easy to get caught up in the details of a problem, forgetting what the actual end goal is. It’s helpful to take a moment to slow down and identify what the end goal is, which is usually larger/broader than the task at hand, and make sure you’re not getting hung up on details that will only have a minor impact. To do this, try increasing the visibility of your end goal, and tracing all changes/initiatives/tests back to that goal, so that your team always has their eye on how their work relates to the bigger picture. A good Product team will call things out if they feel out of line.
Plan to learn along the way
No decision will ever be perfect. Expecting to fail and learn (in a controlled way) along the way will put you in the mindset of execution rather than endless analysis. This comes down to managing the expectations of your manager, your team, and yourself. Knowing upfront that you’re going to take some chances makes it easier to act on them early. Instead of building out a plan that requires knowing all the answers, or delaying the decision, break it up into smaller, prioritized decisions and start at the top. Working through this list will allow you to make smaller decisions that are less risky and less daunting, and to learn faster. Setting this expectation at the start will take the pressure out of the situation.
Doing something (within reason) is better than doing nothing
Action typically outweighs idleness when faced with a product decision. This feeds the learning mindset, pushing you to take steps towards your goal, even if they are not 100% the right steps. The key to managing this risk is to keep the steps small, and to tackle lower-risk problems with less information while spending more time on higher-risk decisions. Those small steps will compound into leaps of positive progress, and you will have made them quickly. One of our company core values at CrossChx is “excuses are for losers”, so every time I’m faced with a decision where there doesn’t seem to be a clear path forward, I remind myself that there is always something that can be done. There is typically no excuse to do nothing—there is always something that can be learned.
Being at the center of products, PdMs own the result of all decisions made. To effectively make decisions, Product teams typically employ a number of frameworks, but can get bogged down in the perfect solution. To take full advantage of opportunities, PdMs should consider the timeliness of their decision heavily against the potential impact of their decision. Making the right decision too late isn’t valuable to you or your customer.
Afterword: While searching to see how other PdMs feel about this topic, I stumbled across Brandon Chu’s blog Making Good Decisions as a Product Manager (quoted above). I cannot agree more with his post and recommend reading his post if you want a deeper dive on the mechanics of this sort of decision making.
To me, the role of Product Management is to identify opportunities presented by users, customers, or the market, and then to capitalize on that opportunity by leveraging the collective capabilities of the company (by building software, designing a shoe, offering a service, etc). Notice that Product is merely working with other divisions to accomplish the ultimate goal of seizing opportunity. While Product Managers (PdMs) tend to be intelligent, well-rounded people, they are not experts in all areas required to make a product successful, nor should they be.
Steve Jobs famously said “It doesn’t make sense to hire smart people and then tell them what to do; we hire smart people so they can tell us what to do.” It’s typically not Product’s job to hire the entire team, but PdMs should follow the same sentiment when working across the company. The PdM won’t necessarily have direct “authority” (in terms of reporting structure) over engineers, marketers, or designers but will need to leverage all of their skill sets and input in order to make their product initiative a success. Enter the need to be able to lead without authority.
Before diving into how to effectively lead without authority, it’s worth highlighting a few day-to-day examples of why Product must do so. At the highest level, Product may need to rally people from several teams behind an idea or opportunity. Perhaps there is an opportunity for much-needed revenue, or an idea for a new feature that a PdM feels strongly about; they will need to gain buy-in from people across the company, above and below them, to drive the initiative to success. Once an initiative is off and running, PdMs play a key role in motivating all parts of the team to execute and maintain a consistent cadence. While this is a responsibility shared by everyone involved, the PdM should keep the team focused and engaged to move the project forward. Finally, on a more micro level, PdMs help lead and facilitate smaller decisions on a daily basis. For anything from a design change to an adjustment in priorities, the PdM will need to help ensure proper due diligence is done and provide insight and direction from their point of view.
Given this context, I believe these are three of the top techniques for leading without authority:
Evangelize your vision and desired outcome
In order to lead a team to achieve your product goals, you must provide your view of the future first. I like to think of this in the form of a vision, showing how you see this particular plan playing out, and in the form of a desired outcome, providing a more tangible end goal. The former is not a prescription of how the product/initiative must look in the end, but a story about the potential of the initiative, which should energize the team to participate. The latter is more solid; it gives everyone a tangible metric or goal to work toward and gives context to why the project is important. This will become more important in point two.
As you convey your vision, try to tell a story that will resonate with your audience and that goes past the conclusion of the current project. You want your team to be energized by the impact that this project will have and where it will take the company in the future.
When providing a desired outcome, I find it best to accompany it with solid reasoning and data about why it is the right thing to “chase” right now. Many times, this ties back to a company goal, tactic/strategy, or vision, and it’s important for everyone you’re working with to understand that.
Leave it to the experts
As we’ve already noted, it’s most effective to allow experts in each area to own decisions in their particular domain. That is why point one is so important. It allows the PdM to think deeply about the vision and desired outcome for a given initiative, and then leverage the collective and individual expertise of each team member to make decisions in their space. A PdM that tries to dictate solutions or approaches in others’ areas of expertise will likely make suboptimal decisions and sour relationships along the way.
To do this effectively, present your end goal to each stakeholder, and then ask a lot of questions about how they think they’ll help achieve the goal from their perspective. Try to understand what they see as critical, then help them formulate a plan to achieve that. The most important point here is to put your faith in their decision, while making sure everyone knows how all of the pieces fit together.
Celebrate the hard work
After all of the releases, campaigns, tests, cold calls, and mockups, the product has achieved its desired outcome, or at the very least you’ve learned a ton along the way. Along that journey, it’s important to celebrate the team’s accomplishments, but even more important to put the emphasis on the people you’ve worked with. After all, they likely did the majority of the legwork, and they typically aren’t the first to be recognized when others think of the product or initiative. Showing genuine praise will help you gain trust from those on your team, and will attract others in the organization to want to work with you on future projects.
Praise can come in a lot of ways. It could be giving someone credit in a meeting with other leaders, a post in your #general Slack channel, or an explicit callout at an all-company meeting. As long as it’s authentic, praising your team’s work will return dividends when you’re in the trenches together. Honorable mention for this point: get to know your colleagues on a personal level and don’t be a jerk.
While leading without authority is a less technical Product Management skill, I contend that it is the most important skill for a PdM to have. So much of the job depends on coordination across the company (which cannot be made up for with intellect or time) that it’s hard to make progress without it. Experiment with these techniques to see what works with your leadership style; you’ll know you’re on the right track when you no longer feel like you’re fighting for control.
As technologies have rapidly improved and advanced in their cognitive abilities, humans have willingly reduced themselves to manual routers of information. But wait… weren’t computers supposed to do that? I want us all to change the way we think about AI in order to build communities, not walls. AI should be used to scale human capacity rather than replace human necessity, working side-by-side with healthcare employees of all levels and specializations.
I spoke at the “Machine Learning and AI for Healthcare” event at HIMSS a few weeks ago and because there wasn’t any video footage I wanted to at least share my thoughts on AI in this blog post.
I truly believe that it will require all of us to solve the problems that exist within healthcare by changing the way we think about AI in order to build communities instead of walls. That’s why I titled this “Integrate with Nothing, Outegrate with Everything.” I know what some of you are thinking. Outegrate isn’t even a real word. Let me tell you from personal experience, you can’t be a successful startup without having the ability to dress funny, use words like “disrupt,” “lean,” and “unicorn,” and lastly you can’t be a successful startup without having the ability to make up words once in a while.
Some of you may remember CrossChx as the startup that brought a DeLorean to HIMSS two years ago, with our “lobby of the future” concept. We were focused on solving the problems around identity and data resolution. Based on the success that came from creating a unique global patient ID, our customers requested our help to further reduce costs that are incurred by the stress points associated with manual processes that result in claim rejections, denials, no shows, and more. We knew we had an even bigger opportunity to solve massive problems in healthcare with AI. What we believe is that for AI to be truly useful, it must be indistinguishable from a human and that AI should do what machines do best — leaving the tasks that require creativity, empathy, and passion to people.
Now that you know a little bit about our story I want to set the tone for why I believe what I believe by sharing my personal story.
I was born in India to Aviraj and Sureka, and I have an older brother named David. If you’re Indian, or know someone who is, you know that there are a plethora of people in Indian families. Some that you know, and some that you will never meet, but that’s just the way it is. Until I was six years old, there were twelve of us living in four bedrooms, sharing one kitchen and one bathroom all on the bottom floor of an apartment building. When we immigrated to the United States in 1991, I met nearly a hundred more family members and we all lived within a ten mile radius of each other in the suburbs of Washington, D.C. The biggest lesson I learned throughout my childhood was that community matters. There was always a reason to celebrate…Thanksgiving and Christmas? Let’s get a hundred people together and pack them into the smallest place possible. You got a new pair of shoes? Great! Let’s celebrate that by going out to eat!
Historically, humans were centered around family and communities.
Generation after generation lived together under one roof, and when families did live separately, they never moved very far. We have since become a more individualistic culture. We walk around all day with our heads buried in our phones. We rely on ourselves. We live far away from where we were raised. Our connections with other people now most often take place in the workplace.
It seems as though some of the technologies within healthcare have succumbed to the same mindset.
Now there are plenty of inventions that have helped us connect what was once disconnected. Think about electricity, the automobile, railways, mobile telephones, Facebook, and the list goes on. On the flip side, we also know that there are plenty of technologies that do not help us become more connected. Healthcare, for example, has made significant investments in EHRs, email, phone, chat, analytics, patient portals, and revenue cycle tools. Not to mention the time spent on hiring and training staff on these applications at disparate parts of an organization.
Instead of these technologies existing within a cohesive community that benefits both the patient and provider, they have built walls.
These are pervasive problems that I’m sure each of us have to deal with on a daily basis, but imagine what that must feel like for healthcare workers around the world. You could walk around most healthcare trade shows and slap vendor logos on each one of these walls. Most technologies require expensive integrations and the client is stuck paying the bill and allocating their IT staff to complete each of these projects.
“The sheer volume of healthcare data and the industry’s inability to tap its potential adds up to more than $300 billion annually in wasted value. Add to that compliance with federal privacy laws, and it’s no wonder patient care is often a mess. Multiple health data sources keep information such as clinical, financial and operational data siloed and separated, a problem that’s compounded by each data system’s unique validation rules, formats and key identifiers. With different databases and software systems holding different subsets of data, it’s difficult to get a complete picture of a patient — so accurate analysis of all that information is tough to do.” — The McKinsey Global Institute.
Because of the lack of interoperability with these technologies, human employees such as clerical staff, nurses, and even doctors, have become the routers of information.
Transferring data from one system to another. Typing and clicking their way through routinizable tasks, and spending less time focused on direct patient care. Recent studies have shown a disturbing trend: physicians spend nearly 50% of their time working within the EHR and the other technologies they use. Aside from being stressful and tedious, clerical processes are naturally prone to human error, and those errors can ultimately cost organizations tens of thousands, if not hundreds of thousands, of dollars each year.
So what can AI do to help scale human capacity rather than replace human necessity?
Let’s begin by helping hospitals and health systems optimize operations. AI allows healthcare organizations to automate a variety of tasks, including eligibility, order management, prior authorizations, claims processing, and more. AI doesn’t compromise the IT infrastructure you already have in place; it merely helps you run the tools you already have more efficiently.
Earlier I talked about the investments that healthcare organizations have made in a variety of technologies. When you hire someone, you don’t expect them to show up with their own applications to perform their duties, so why expect that of your AI? Instead, we should expect AI to adopt the tools that are already in place and use them to perform its duties just like any other human. Just like an employee, an AI solution can get an email address, an EHR or system account, a VPN, and access to any other tools essential to fulfilling an employee’s duties.
Because AI can work 24 hours a day, 7 days a week, and 365 days a year, it will empower healthcare workers to spend less time processing data and more time focused on direct-patient care.
There will be obvious benefits that will come from AI including cost reductions, increased revenue, time saved, satisfaction for staff and patients alike, but for the sake of the people on the front lines of healthcare I truly believe AI will help reduce burnout and help scale human capacity by allowing us all to focus on the things that can only be accomplished by the wonders of the human brain.
Let me give you a quick example of what is possible with AI. I recently went on a site visit to a rural hospital in Georgia that has implemented AI to handle its eligibility and order management process. This is a 50-bed hospital that is more than likely the largest employer in the area, and they didn’t even flinch when they saw AI as an opportunity. Instead of the doom and gloom that can be associated with AI, they focused on the overall positives.
The entire leadership team there got together and decided to send their Director of Patient Access to Disneyland for the Quality Service course taught by the Disney Institute. She was able to come back and reallocate her staff to direct-patient care and customer service needs rather than focusing on the repetitive, high-volume processes that they used to handle. Now will everything be that easy? Maybe not, but it does give us a glimpse of what is possible.
Now, let’s talk about how implementing AI can help not only organizations, but actual patients. Imagine having hundreds of instances of AI live at health systems around the country. Say you’re traveling across the country for a family reunion, and while you’re eating at a rest stop you chip a tooth. You don’t want to show up at the reunion with a chipped tooth, so you text an AI solution: “Hey, I chipped my tooth. Can you find me a dentist that can fix it ASAP?” The AI solution replies: “There are five dentists in this area that take your insurance, and since I’m logged into their schedules, I can tell you the available times. Which one would you like me to schedule?” You text back “2pm,” and the AI solution gathers your records from every dentist you’ve ever been to, puts them together, and inserts them into the system of the dentist you’re going to visit. There’s no need for forms or eligibility checks; the necessary information and data is already there. That is the type of future we can provide to consumers, and that is the type of future I want to be a part of with AI.
Click Less, Care More.
We work in an industry that has historically been left behind in technological revolutions — so it’s incumbent on each of us to make a commitment to move our technologies and organizations forward with AI. Once we’ve delegated our data heavy tasks back to computers, we’ll empower our teams to stand back up, click less, and care more. Doctors, nurses, and even clerical staff will finally be able to give their full attention to creating communities within your health networks, without the stress of tasks that are better suited for machines. AI will work alongside of us — not instead of us — in every department, of every organization. When our human capabilities are augmented with AI, we can create communities instead of walls like we’ve always imagined. Integrations cause stress, confusion, and headaches… but outegrations can transform our facilities into innovation centers capable of transforming the way we provide healthcare.
If you would like to learn more about how CrossChx is helping hospitals and health systems around the country Outegrate with Everything by hiring Olive, our AI solution, please visit hireolive.com/outegrate
We are moving into a new world very quickly. A world where many of our coworkers will be digital. AI bots, or SCILBOTS as I call them, will become a meaningful slice of our current workforce and will provide the efficiency we, as humans, need to scale. Here’s the thing…how will we pay these bots?
We think we have an answer. Just as the electricity in your home is metered by kilowatt hours, our bots are metered by kilobot hours. What is a kilobot? A kilobot is 1,000 weighted actions of an AI bot. Actions are the most discrete measurement of an AI bot’s work. Imagine a “click” or “entering text” as a discrete action. A more complex action, like using computer vision to recognize handwriting, is also a discrete action, but since it’s a heavier lift it consumes kilobot hours at a higher velocity.
Think of your home: when you plug in a night light, the kilowatt-hour meter spins; when you plug in a refrigerator, it spins faster. Likewise, when an AI bot is simply automating a task with RPA on a user interface, the kilobot hour meter spins slowly; when the bot uses a neural net to make a complex decision, the meter spins faster.
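The weighted-action idea can be sketched in a few lines of code. The weights, action names, and class below are invented for illustration; they are not CrossChx’s actual rates, only a way to show how light RPA actions and heavy neural-net actions could feed one meter.

```python
# Hypothetical sketch of a "kilobot hour" style meter: every bot action
# carries a weight, and 1,000 weighted actions equal one kilobot.
# All weights here are made-up examples, not real pricing.
ACTION_WEIGHTS = {
    "click": 1,                  # simple RPA on a UI: meter spins slowly
    "enter_text": 1,
    "ocr_handwriting": 25,       # computer vision: a heavier lift
    "neural_net_decision": 50,   # complex decisions: meter spins faster
}

class KilobotMeter:
    def __init__(self):
        self.weighted_actions = 0

    def record(self, action: str, count: int = 1) -> None:
        """Accumulate weighted actions for 'count' occurrences of an action."""
        self.weighted_actions += ACTION_WEIGHTS[action] * count

    @property
    def kilobots(self) -> float:
        """1 kilobot = 1,000 weighted actions."""
        return self.weighted_actions / 1000

meter = KilobotMeter()
meter.record("click", 500)               # 500 weighted actions
meter.record("neural_net_decision", 10)  # 500 more weighted actions
print(meter.kilobots)  # 1.0
```

The design choice mirrors the electricity analogy: one counter, different draw rates per appliance, one number on the bill at the end of the month.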
We created this methodology for pricing AI bots because, frankly, nothing better existed. The current bot marketplace is full of unscalable economic models like charging per bot, per software seat, per license. We don’t think those methods are sufficient for the wave of AI we are about to encounter. We think organizations will treat their AI bot workforce like infrastructure. Like a utility. Similar to electricity. We wanted to create a way for companies to implement AI across every element of their business and have one number to calculate spend and one metric to understand ROI. Companies will soon look at their monthly or yearly kilobot hour consumption and compare that to the ROI they see from their AI bots. ROI will be measured in things like: increased efficiency, decreased errors, better customer experience, increased quality, and getting more out of their human workforce by scaling them from rote tasks to more sophisticated cognitive tasks.
If your organization doesn’t have an AI bot strategy, we can help. If you haven’t thought about how AI bots will augment your humans, we can help. The time is now to start integrating AI bots into your strategy and to start budgeting for kilobot hour consumption. If you wait more than 12 months, you’ll be behind the curve and you’ll probably be catching up with your competitors who have already started putting AI bots to work.
Our AI bot, Olive, has been hired by dozens of companies already. By next year, it will be hundreds. By the end of 2019, thousands. By 2020, there will be over a million AI bots working side-by-side with human workers.
We invented the kilobot hour, but we don’t expect to be the only company that adopts it. We welcome other companies that are building AI bots to consider the kilobot hour as their pricing model. We’ll all compete on rates and create a true competitive marketplace. This is a new market and it needs leadership now to create enduring economic models. We’re happy to be that leadership at CrossChx and thrilled to be part of inventing the future.
We imagine a future where scaling humans is commonly understood by business leaders and the AI bot workforce helps bring a new level of efficiency and modernity to clunky enterprises where humans are spending too much time doing things meant for machines. The first AC electric meter was invented just a year after Tesla created AC power. We’re following the same track and hope the world imagines the same future we do.
NCI, Inc. (“NCI”), a leading provider of advanced information technology (IT) solutions and professional services to U.S. Federal Government agencies, announced today that it has entered into an exclusive partnership with Columbus, Ohio-based CrossChx, Inc. Under the partnership, CrossChx will work with NCI to bring its artificial intelligence (AI) commercial capabilities to government customers, enhancing NCI’s quality of premier solutions while simultaneously creating new opportunities for NCI employees.
“We look forward to partnering with CrossChx to bring our customers an innovative new solution to increase efficiencies while delivering greater mission success,” said Paul A. Dillahay, president and CEO of NCI. “Our solutions will address human capital needs, especially for functions that allow people more opportunity to manage rather than manually operate those processes. This helps to remove elements of human error, boost workforce focus on solutions and innovation, and ultimately result in better outcomes for both our customers and employees.”
NCI’s AI capabilities will be unique in the market through a focus on scaling humans, an AI approach that uses continual machine learning and automated processes to build greater workforce and organizational results. Through this partnership, NCI intends to explore opportunities to improve the efficiency of its current operations, providing true interoperability and collaboration between customers, employees and AI systems.
NCI launched seven pilots in August 2017, utilizing the CrossChx AI platform to provide proof of concepts across areas such as fraud, waste and abuse, cybersecurity, machine-to-machine communication (M2M) and patient care coordination. NCI plans to deploy these capabilities in several customer environments beginning in 2018 and will have a patient care demonstration available at the HIMSS18 Conference and Exhibition in Las Vegas from March 5-9, 2018.
“CrossChx is excited for the opportunity to help NCI build out their AI capabilities,” said Sean Lane, co-founder and CEO of CrossChx. “We anticipate implementing AI solutions across all of NCI’s operations in order to help them increase speed and productivity for their clients. After finding immense success in helping healthcare facilities adopt operational AI with our solution Olive, we are eager to bring this technology to the federal government where we think it will make a significant impact.”
About NCI, Inc.:
NCI is a leading provider of enterprise solutions and services to U.S. defense, intelligence, health and civilian government agencies. The company has the expertise and proven track record to solve its customers’ most important and complex mission challenges through technology and innovation. With core competencies in delivering cost-effective solutions and services in areas such as agile digital transformation; advanced analytics; hyperconverged infrastructure solutions; fraud, waste and abuse; and engineering and logistics; NCI’s team of highly skilled professionals are expanding their portfolio to include game-changing technology offerings such as artificial intelligence for their government customers. Coupled with a refined focus on strategic partnerships, NCI is successfully bridging the gap between commercial best practices and mission-critical government processes. Headquartered in Reston, Virginia, NCI has approximately 2,000 employees operating at more than 100 locations worldwide. For more information, visit www.nciinc.com.
Founded in 2012, CrossChx is building operational artificial intelligence, which empowers humans to achieve more than ever before. Olive, the company’s AI solution, acts as the intelligent router between systems and data by automating repetitive, high volume tasks and workflows providing true interoperability for organizations. Headquartered in Columbus, Ohio, CrossChx has a mission to scale humans by allowing AI to operate existing systems and letting it do what machines do best. For more information, visit www.hireolive.com or email email@example.com.
Amanda Hall, 703-707-6677
Director, Corporate Communications
Joel Chakra, 301-792-1720
Head of Product Marketing
My mind races when I think of all the similarities between the past rise of computing and the current rise of artificial intelligence. It is amazing to me as I watch what seems to be the same story, with different characters, playing out all over again. Not everything is the same, obviously, but there are some key, fundamental similarities. The biggest difference, I think, is that most people are aware it’s happening because they’ve seen how fast technology moved before with computing. At least I hope everyone is aware.
Enter stage left, IBM. Yep. Just like in the good ole’ 1960s. They had the lock on computing through their mainframe market. These giant, room sized machines produced magical outputs that would one day turn our future into a dystopian sci-fi novel. Right? Well it was hard to know because very few people actually saw them, or used them, or understood them. But they sounded very impressive and of course the computing technology certainly was. It was the mainframe that kicked off our destiny with computers. It was a glimpse into our future relationship with intelligent machines, but it wasn’t the mainframe that changed the world.
IBM has a new mainframe. They call it Watson, and it does AI. Have I ever seen one? Nope. Just on TV when it played Jeopardy. Is it big…probably. Expensive…you bet. Are lots of people allowed to program on it…nope. But wait…in all fairness, they do have the Bluemix application developer capability that exposes Watson skills. Sort of like a modern-day IBM 5150. The 1981 5150 was IBM’s attempt to enter the PC market after Apple had sold 6M Apple IIs since 1977. The 5150 was the best example of IBM “shrinking” their mainframe capabilities and putting them in the hands of real people, after a couple of vaporware flops (not FLOPS) with the SCAMP. They sold about 100,000 units. Not bad, but not Apple. One thing it did succeed in doing was getting Microsoft its first big piece of market share. So…are we in 1982 with AI? You have Watson, which is getting beat up like crazy right now for allegedly being all sizzle and no steak. You have a growing number of companies diving into the AI space, similar in volume to how many companies were jumping into the PC market in the early eighties. Maybe 1982, but let’s unpack this some more.
Let’s talk about what made the PC so powerful and seminal in computing history. I think it was three things: they became accessible, relatable, and programmable. Wow. I just blacked out for a minute right there. That was genius.
Okay, so the PC became accessible. That means normal humans could get their hands on it. They could put it in their house without having to sell a kidney. Cool. It can be argued that the 1981 Sinclair ZX81 fit that mold. They were priced at $99 and sold 600,000 units. You could also argue, of course, that the 1977 Apple II was accessible. They sold 6M of them at a price of around $2,000. I’ll settle on accessible in the early 1980s.
Now let’s talk about what relatable means. Relatable means you can use the PC for things that you do on a daily basis. Things at work, home, or school. Things like writing papers, doing spreadsheets, or playing a sick video game. PCs aren’t relatable when they are only used for arcane tasks. Likewise, AI isn’t relatable when it’s used for arcane tasks.
Finally, they became programmable. Not just customizable. Programmable. You could start to make them better and more powerful by creating tools for them. When humans could start making tools for computers to use (software) at scale, it changed everything. The more people with access to the computers, the more programmers were made and the more users of the tools the programmers made. It was and still is a powerful and virtuous cycle.
Let’s use those three elements to assess where AI is today. Maybe that will help us figure out which year we’re in as it compares to computing history.
We’ll start with accessibility. How many people have access to AI? Well…a whole bunch. I mean it’s kind of everywhere. But more specifically, I think we can confidently call Siri and Alexa AI, right? I mean, at least they satisfy some Turing qualities, and deep in the bowels of their code are some neat neural nets for learning and some other cool machine intelligence stuff. Amazon has sold about 8M Echos. They’re not the cheapest thing in the world but they’re not crazy expensive. I have one…but not two. So I’m going to say it’s at 1977 in terms of accessibility. It’s super important to note that besides these voice assistants, AI isn’t that accessible. It has a long way to go. Maybe the iPhone X will have an impact with its on board GPU.
Next, let’s look at how relatable AI is. Like above, the Alexas and Siris of the world are super relatable. However, most of the AI companies out there are focused on things that aren’t so relatable. AI certainly hasn’t invaded our lives and impacted the routine things we do every day. Given the arcane nature of most AI solutions out there, with the exception of Alexa and Siri, I’m going to say we’re in 1977 with relatability too.
Finally, let’s think about programmability. There are some tools out there. TensorFlow, Tesseract, OpenCV, etc. are pretty available. It’s actually pretty straightforward to build a neural net. But man, those GPUs are freaking expensive. How are we supposed to have a million programmers building unsupervised learning when they can’t access the compute power? That’s a problem. That’s like a 1972 problem. Hurry, NVIDIA. We need GPUs in every computer, stat. I also think there need to be some new IDEs. We are in the early days of AI software engineering from a tools perspective. I think this will happen quickly, but we are still mid-’80s from that perspective. Final ruling: Libraries = 1998, Hardware = 1977, IDE = 1983. Average that and we can put programmability in 1986.
Now we average all three categories together and see where we are: Accessible = 1977, Relatable = 1977, Programmable = 1986. Average that out…and we are in 1980.
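As a sanity check, the two averages above can be reproduced in a couple of lines:

```python
# Recompute the post's year-averaging from its per-category estimates.
def avg_year(years):
    """Average a list of years and round to the nearest whole year."""
    return round(sum(years) / len(years))

programmable = avg_year([1998, 1977, 1983])   # libraries, hardware, IDEs
overall = avg_year([1977, 1977, programmable])  # accessible, relatable, programmable
print(programmable, overall)  # 1986 1980
```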
Welcome to 1980. The Apple II just rocked the world of personal computing, and Apple is about to hibernate until 1998, when they come back from the dead with the iMac. IBM is about to retaliate with the IBM 5150 to little fanfare. Underdogs Commodore and NEC are about to outpace IBM’s launch for the first half of the decade, selling over 30 million machines. And this year, Tim Berners-Lee is about to invent hypertext. Get ready.
Wonder what this year in AI will look like and how many similarities there will be?
Once I figured out time was the most valuable resource on Earth, it changed the way I thought about everything.
At NSA I was part of a program where the general concept was to get intelligence to warfighters in real time, orders of magnitude faster than ever before. It was one of the most effective programs the intelligence community has ever seen, in my view, and it was fundamentally about reducing time and creating speed to insight. It was about giving time back to warfighters. It was about giving them the tools and information to make decisions faster.
My first company built tactical cellular networks for the military. It was a software company with some hardware design and a great deal of systems engineering and solution architecting. You can describe the product we offered, and the resulting soul of the company and culture, in many different ways. But ultimately we arbitraged time. We built systems that provided faster intelligence. Faster answers. Faster information. If we could beat time by an order of magnitude, then we were valuable.
When we started CrossChx, we wanted to get comprehensive health information about patients to doctors faster. We started by identifying patients faster. We gave time back to the registrars. We gave time back to the patients. We did this with a product called SafeChx.
We then focused on giving time back to patients and registrars by figuring out a smoother way for patients to sign in when they show up to the hospital. We called that product Queue.
Next we wanted to crush the time waste patients and providers experienced when they showed up to an appointment and had to fill out those dreaded paper forms. We thought patients shouldn’t have to keep filling out the same paper forms over and over and over again. We wanted their experience of checking in to a medical appointment to be similar to checking in for a flight. So we created an app called CrossChx Connect (still available on the app store) that let patients fill out their medical history and insurance information for themselves and their family one last time, and then share that information with any doctor or hospital they wanted. We wanted to give patients and healthcare providers their time back.
Now, I think we’ve hit the holy grail of time hacking. Over and over again in hospitals we saw repetitive, mundane tasks being done at extremely high volumes by humans who should be spending their time doing other things, like talking to other humans (patients). We saw them doing things that an intelligent router, or better interoperability, or at least better software should have made obsolete years ago. As we peeled back the layers of the onion, we realized that these routine tasks were pervasive. In hospitals, 40% of the costs are attributed to employees who perform administrative tasks. And even with all that investment, these tasks are being done less than perfectly. Mistakes happen, there’s not enough time to do them all, things fall through the cracks, backlogs haunt every department…the list goes on and on. Most of the 5,000 hospitals in the country are struggling to exist. They are fighting razor-thin margins, and clerical errors, or simply not getting to all the routine tasks, make survival even harder. We realized that we needed to fix this problem. We wanted to give humans their time back.

To solve this problem, we created Olive. Olive is an employee. She logs into all the same software a human uses, the same way a human does. She performs these high-volume, repetitive, mundane tasks just like a human does. However, she does it with ease. She never lets anything slip through the cracks. She never makes errors. She never gets sick or takes vacation. Olive is an artificial intelligence bot. She’s a SCILBOT, as I wrote about earlier. She breezes through hundreds…thousands of tasks with ease. She emails her boss at the end of the week summarizing all the work she accomplished and provides insights on things her boss should be paying attention to or suggests how to do things better.
We launched Olive in April 2017. She’s being adopted at an incredible rate, with over 40 organizations hiring her as of the time of this post. She’s truly a powerful tool and a much needed technology for healthcare. Think of all the time she is going to give back to humans. Olive will give more time back to humans than anything I’ve ever built before. Get ready humans. You’re going to have a lot more time to do a lot bigger things.
Imagine you have a robot with a great set of mechanical eyes. These eyes can see and interpret things in the physical world. One of the things this robot can see and interpret is software user interfaces. By looking at a screen, the robot can tell where the submit button is, where to hit like, and where to swipe left or right.
Now imagine this robot’s eyes can just as easily train on a 72-inch screen on a wall in your living room. And these eyes are so good, the robot can see every pixel. Now imagine that robot wants to know Wikipedia. I mean it wants to know ALL of Wikipedia. The robot could put Wikipedia up on that 72-inch screen and read (very quickly) all the content, methodically clicking on every single link and turning over every single rock. This could work. The robot would learn all the information. But it may not be the most performant and efficient way to transfer this data. But let’s say it goes ahead and reads Wikipedia this way. How would it store the data? In what structure? Now imagine it can store the data by performing entity extraction on ingestion, “as it reads it,” and then form a massive entity graph of people, places, things, and concepts.
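The entity-graph idea can be sketched very roughly. Real entity extraction would use an NLP model; here a toy list of known entities stands in, and co-occurring entities are linked in a simple adjacency-list graph. All names are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Toy stand-in for an entity-extraction model: a fixed set of known entities.
KNOWN_ENTITIES = {"Nikola Tesla", "Edison", "AC power", "New York"}

def extract_entities(sentence):
    """Return the known entities mentioned in a sentence."""
    return [e for e in KNOWN_ENTITIES if e in sentence]

def build_graph(sentences):
    """Link every pair of entities that co-occur in a sentence."""
    graph = defaultdict(set)
    for sentence in sentences:
        for a, b in combinations(extract_entities(sentence), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

g = build_graph([
    "Nikola Tesla demonstrated AC power.",
    "Edison opposed AC power in New York.",
])
```

The resulting graph links people, places, and concepts the way the post describes: after ingestion, "AC power" is connected to both Tesla and Edison.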
Okay, so that created an interesting picture in your mind, I hope. I hope you imagined a robot standing in front of a giant TV, reading all of Wikipedia as fast as superhumanly possible and storing all the information in a beautiful graph, thus storing the data much like the human brain does.
With that picture still in your mind, let’s stay with the cool graph brain but redo the way the robot ingests the data. Imagine instead we took all the Wikipedia data and turned it into a bitmap across a 72-inch TV. Now imagine that bitmap changed every 0.1 seconds. Imagine then how much faster the robot can ingest all that data. The robot, after staring at a screen for a few minutes, can ingest all of Wikipedia. If the robot just hooked up to Wi-Fi, couldn’t we just zap all that data into the robot’s brain? Sure. That could work. As long as the robot could find and accept the data feed. But think about how humans ingest information. Through our senses. If we want to make Turing-like AI that closely resembles humans and can think with the same sophistication, shouldn’t we try to mimic the way a human ingests information? Maybe the data rate of a feed over Wi-Fi is faster. But certainly seeing data through the eyes and hearing data through the ears is more ubiquitous. And isn’t it really just a matter of time until we figure out how to pass information faster through computer vision than we can today through wireless data?
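A quick back-of-the-envelope calculation shows how much raw data the bitmap channel above could carry. The panel resolution, bit depth, and frame rate are assumed values (a 72-inch screen doesn’t fix its pixel count), so treat this as an illustrative upper bound, not a claim about any real system.

```python
# Assumed parameters for a bitmap-on-a-screen data channel.
WIDTH, HEIGHT = 3840, 2160   # assume a 4K panel
BITS_PER_PIXEL = 24          # full RGB used as raw data
FRAMES_PER_SEC = 10          # one new bitmap every 0.1 seconds

# Raw capacity if the robot can read every pixel of every frame.
bytes_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL // 8
throughput_mb_s = bytes_per_frame * FRAMES_PER_SEC / 1_000_000

print(bytes_per_frame, throughput_mb_s)  # ~24.9 MB/frame, ~248.8 MB/s
```

Under these assumptions the visual channel moves on the order of a couple hundred megabytes per second, which is why the post’s “few minutes for all of Wikipedia” intuition is at least plausible for text.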
What if two robots wanted to communicate with each other? Most people would conclude today that they would use some wireless protocol with an authentication handshake to link up and pass data back and forth. Imagine if they could communicate more ubiquitously through the visual spectrum.
Sometimes I hear the question as to whether or not software makers will program obstacles for bots. My answer is that the ones who want to win will do exactly the opposite. In the near future, consumers, both enterprise and commercial, will start to expect if not demand accessibility for bots. Enterprises that have invested in an automation infrastructure will want to ensure that any new software they purchase is easily learnable and usable by their bot workforce. This means that any software that throws up roadblocks to bots will have a major disadvantage. The market will force software to adapt, and will start to evaluate ease of use by bots when deciding on software purchases.
Let’s take it a step further. Software will start coming with a bot user interface (and this ain’t gonna be APIs) similar to how software apps have a mobile version or are “responsive”. The new “responsive” will include the ability to easily serve up capabilities and functions for bots.
One step further, there will be a “jQuery” for bots. There will be a library of UI tools that developers turn to to design bot interfaces for their software. This new jQuery will not be focused on aesthetics; it will be focused on performance, packing as much functionality and as many accurate data-transfer techniques into the UI as possible. This library of front-end tools will become the language of compatibility in the bot universe, and it will ensure that any bot can use advanced computer vision capabilities to see, learn, and use any piece of software.
So who’s gonna create the jQuery for bots?
Looking forward to the future of democratized AI.