
IBM and The Weather Channel launch detailed local COVID-19 maps and data tracking

There are already a number of resources available for mapping the spread of confirmed COVID-19 cases both in the U.S. and globally, but IBM and its subsidiary The Weather Company have launched new tools that bring COVID-19 mapping and analysis to more people via The Weather Channel mobile app and website.

Existing tools are useful, but come from fairly specialized sources including the World Health Organization (WHO) and Johns Hopkins University. This new initiative combines data from these same sources, including globally reported confirmed COVID-19 cases, as well as reported data from sources at both the state and county level. This is collected on a so-called "incident map" that displays color-coded reported case data for states and counties, as well as on statewide trend graphs and through reporting of stats including the relative percentage increase in cases week-over-week.
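The week-over-week figure described above is a simple relative-change calculation, and the color coding amounts to bucketing counts into tiers. A minimal sketch in Python (the thresholds and color names here are illustrative assumptions, not The Weather Channel's actual scheme):

```python
def week_over_week_change(current_week: int, previous_week: int) -> float:
    """Relative percentage increase in reported cases, week-over-week."""
    if previous_week == 0:
        # No prior cases: any new case is an undefined/infinite increase.
        return float("inf") if current_week > 0 else 0.0
    return (current_week - previous_week) / previous_week * 100


def color_bucket(case_count: int) -> str:
    """Map a county's reported case count to a hypothetical color tier."""
    if case_count == 0:
        return "green"
    elif case_count < 50:
        return "yellow"
    elif case_count < 500:
        return "orange"
    return "red"


# Example: 130 reported cases this week vs. 100 last week.
print(week_over_week_change(130, 100))  # 30.0 (a 30% increase)
print(color_bucket(130))                # orange
```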

On top of these sections built into the core, consumer-facing products, IBM has also launched Watson- and Cognos Analytics-based tools intended for use by both researchers and public officials, though they're also meant for general public consumption. IBM is also providing fact-checking resources and practical guidance for both COVID-19 patients and the general public, to help inform people not only about the spread of the virus, but also about the steps they can take to protect themselves and others.

One of the key elements of COVID-19 mitigation is making sure that the average American has access to reliable and accurate information, including the most up-to-date guidelines about social distancing and isolation from trusted experts including the WHO and the Centers for Disease Control and Prevention (CDC). That makes this a key resource in the ongoing efforts to curb the spread of the coronavirus, since it resides in an app that is among the most popular pieces of software available for smartphones. There are around 45 million monthly active users of the Weather Channel app, which means that this information will now be readily accessible to a large percentage of the U.S. population.

Startup malaise, startup ambition

Recaps. Layoffs. Slowdowns. CEO transitions. Budget cuts. Downsizing.

In spite of a spate of massive startup exits over the last few months, culminating in fintech's shining moment yesterday with Intuit's $7.1 billion acquisition of Credit Karma, it's been a tough period for the startup world. Layoffs abound, centered perhaps on SoftBank's Vision Fund portfolio but hardly exclusive to it. Startups, both infamous and unheard of, are shutting their doors. And that doesn't even begin to factor in global macro concerns like the coronavirus that will drive investor sentiment this year.

There’s a bit of a malaise underway in the startup world, a sense that possibilities are closing, that everything that will be built has been built, that tech itself is under such an excruciating public microscope that innovation has become impossible.

All of that may well be true. And yet, there remains so, so much more to get done.

Whole sectors of the economy still need to be completely rebuilt from the ground up. Health care is barely digital, never personalized, and based on almost no evidence or data whatsoever. Construction costs for housing and infrastructure have skyrocketed, with almost no real benefit to the end user. Millions of people are facing student debt crises, and yet our school system doesn’t look all that much different from a century ago.

Climate change itself is going to eat away at more and more of the planet, just as several billion more people come online, join the industrial and knowledge economies, and demand the same amenities offered in the developed world. How do we offer air conditioning, housing, transportation, health care, and more to every human on the planet? We need to 100x the global GDP while cutting carbon emissions, and billions of people are counting on us.

Within organizations, we are still just beginning to figure out how design, data, and decisions work together to drive product innovation and growth. I just wrote about a prototyping tool yesterday, following up on my colleague Jordan Crook’s look at what has been happening in the design world. Yes, the tools are getting better, but what would happen if a million more people could effortlessly design? Or what would happen if billions of people had access to no code platforms more broadly? What could we empower them to create?

Or just take our general experience with digital products. Our phones are faster, the photos they take are at exquisite resolutions, and their svelte materiality remains superb. But do they really offer a seamless experience? I am still syncing files, tracking emails, attempting to connect a lunch meeting to my calendar and not dropping the details while flicking my fingers back and forth. The mundane nature of our daily software usage belies the reality that we use ridiculously elementary tools compared to what is possible even with today’s technology, no hand waving required.

And then there is data. The data revolution in business, entertainment, government, and more is barely in its infancy. Data may be sloshing around large enterprises, but it hardly makes a dent in decision-making, even today. What would happen if we could use data more effectively? What if we could explore data even faster than today’s clunky BI tools allow? What if the best patterns for exploring data were readily available to every single person on Earth? What if we could instantly and easily build best-of-breed AI models to solve even our simplest decision-making problems?

I could go on for pages and pages. From specific markets, to the dynamics within communities, societies, and companies, to end users and the products they are offered, we are nowhere near the end of the innovation cycle. This isn’t Detroit circa a century ago, when hundreds of auto manufacturers and related companies eventually combined into a handful of today’s behemoths. There is still so much to do, and FAANG can’t do it all.

What’s crazy is that within the right circles, there has never been a wider sense of awe at the gap between what we know how to do and what we know we need to do. There are so many unsolved challenges today worth exploring that could not only improve the lives of tens of millions of people, but that could also become multi-billion dollar economies themselves.

And so we need to bifurcate our sentiments. We do need to memorialize the failed startups, the ambitions that never quite made it. We need to recognize when mistakes are made, and have empathy for those affected by them. We shouldn’t ignore our industry’s negative news at all, lest we repeat the same blunders.

Yet, a positive sentiment in the face of this avalanche of negative news and critical analysis is vital. You have to keep your eye on the future, on the change, on the power that still rests with all of us to make a difference right now. So much needs to be done, and the day is still young.

Databricks makes bringing data into its ‘lakehouse’ easier

Databricks today announced the launch of its new Data Ingestion Network of partners and its Databricks Ingest service. The idea here is to make it easier for businesses to combine the best of data warehouses and data lakes into a single platform — a concept Databricks likes to call ‘lakehouse.’

At the core of the company’s lakehouse is Delta Lake, Databricks’ Linux Foundation-managed open-source project that brings a new storage layer to data lakes, helping users manage the lifecycle of their data and ensuring data quality through schema enforcement, log records and more. Databricks users can now work with the first five partners in the Ingestion Network — Fivetran, Qlik, Infoworks, StreamSets and Syncsort — to automatically load their data into Delta Lake. To ingest data from these partners, Databricks customers don’t have to set up any triggers or schedules — instead, data automatically flows into Delta Lake.
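The schema enforcement mentioned above means the storage layer rejects writes whose columns or types don't match the table's declared schema, rather than silently accepting malformed records. A conceptual sketch of that idea in plain Python (this toy class is an illustration of the principle, not Delta Lake's actual API, which is used through Spark DataFrames):

```python
class SchemaError(ValueError):
    """Raised when an incoming record does not match the declared schema."""


class TinyTable:
    """Toy append-only table that enforces a fixed schema on ingest,
    loosely mirroring the schema-enforcement idea in a lakehouse layer."""

    def __init__(self, schema: dict):
        self.schema = schema   # column name -> expected Python type
        self.rows = []
        self.log = []          # append-only record of accepted operations

    def ingest(self, record: dict):
        # Reject records with missing or unexpected columns.
        if set(record) != set(self.schema):
            raise SchemaError(f"columns {sorted(record)} != {sorted(self.schema)}")
        # Reject records whose values have the wrong type.
        for col, expected in self.schema.items():
            if not isinstance(record[col], expected):
                raise SchemaError(f"{col!r} expected {expected.__name__}")
        self.rows.append(record)
        self.log.append(("append", record))


table = TinyTable({"user_id": int, "event": str})
table.ingest({"user_id": 1, "event": "login"})        # accepted
try:
    table.ingest({"user_id": "1", "event": "login"})  # wrong type: rejected
except SchemaError as e:
    print("rejected:", e)
```

The point of the log list is to echo how such layers keep an ordered record of committed writes, which is what enables lifecycle management on top of raw file storage.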

“Until now, companies have been forced to split up their data into traditional structured data and big data, and use them separately for BI and ML use cases. This results in siloed data in data lakes and data warehouses, slow processing and partial results that are too delayed or too incomplete to be effectively utilized,” says Ali Ghodsi, co-founder and CEO of Databricks. “This is one of the many drivers behind the shift to a Lakehouse paradigm, which aspires to combine the reliability of data warehouses with the scale of data lakes to support every kind of use case. In order for this architecture to work well, it needs to be easy for every type of data to be pulled in. Databricks Ingest is an important step in making that possible.”

Databricks VP of Product Marketing Bharath Gowda also tells me that this will make it easier for businesses to perform analytics on their most recent data, and hence be more responsive when new information comes in. He also noted that users will be able to better leverage their structured and unstructured data for building better machine learning models, as well as to perform more traditional analytics on all of their data instead of just the small slice that’s available in their data warehouse.