‘Conscience laws’ endanger patients and contradict healthtech’s core values

Recent laws that allow healthcare providers to refuse care on grounds of conscience and that deny care to transgender individuals might not seem like an issue for the tech industry at first blush, but this kind of legislation directly contradicts the core values of health tech.

Arkansas Governor Asa Hutchinson last month signed into law S.B. 289, known as the “Medical Ethics and Diversity Act,” which allows anyone who provides healthcare services — not just doctors — to refuse to give non-emergency care if they believe the care goes against their conscience.

Arkansas is one of several U.S. states that have been pushing laws like this over the past several years. These “conscience laws” are harmful to all patients — particularly LGBTQ individuals, women and rural residents — especially because in some states more than 40% of available hospital beds are controlled by Catholic institutions.

While disguised as a safeguard that prevents doctors from having to participate in medical services that are at odds with their religious beliefs, these laws go far beyond that and should be repealed.

“Non-emergency” service is open to interpretation

The Arkansas legislation is one giant slippery slope. Even beyond the direct effects that the law would have on reproductive rights and the LGBTQ community, it leaves open questions about the many different services that medical professionals could decline simply by saying it goes against their conscience.

Broadly letting healthcare providers decide which services they will perform based on religion, ethics or conscience essentially eliminates protections patients have under federal anti-discrimination regulations.

What constitutes an “emergency” to one doctor or EMT may be deemed a “non-emergency” by another. By allowing medical professionals to avoid performing some services, the bill can be interpreted as allowing anyone involved in the provision of healthcare services to avoid performing any kind of service, as long as they say they believed it wasn’t an emergency at the time.

The law also allows individuals to refuse to refer patients to someone who would provide the desired service. This places an undue burden on patients with physical or mental health issues and causes delays in treatment as the patient searches for an alternate provider. In cases of serious and even life-threatening complications, for example, women have been refused treatment at Catholic medical institutions and forced to travel to the closest emergency care center.

The health tech community is working to improve the health of all

The Arkansas law runs counter to the values of the businesses that are working hard to develop and improve medical technologies. Health tech startups at their core are fighting to provide more and better services to more patients — whether it’s by building platforms to make healthcare accessible to all, developing specific medical devices to improve the quality of service or researching new treatments and vaccines.

Imagine developing a vaccine for a global pandemic and then giving doctors the right to refuse to administer it because it’s open to interpretation whether the virus represents an emergency for specific people. Or imagine a hospital pharmacist who deliberately tries to spoil hundreds of vaccine doses because of the conspiracy theories he believes. Laws like the one in Arkansas open up the healthcare system to abuse by conspiracy theorists, and many wellness providers are already basing their advice and services on QAnon falsehoods.

The health tech community is not just developing medications and devices for patients whose beliefs are similar to their own. Equally, medical professionals should not be making it harder for people to get needed medical care based on personal feelings. On the contrary, the ultimate goal of health tech businesses and healthcare providers alike should be a singular focus on improving the quality of care for all.

“Medical ethics” and anti-LGBTQ laws are unethical

As the health tech community continues to work tirelessly to bring new solutions to the marketplace to improve the health of everyone, it must also stand against laws like this, which threaten to eradicate the important gains that have been made in enhancing the lives and health of patients.

The Arkansas law — and others like it — place the burden of finding appropriate care on the patient instead of on the medical community, where it belongs. These laws must be repealed.

Flawed data is putting people with disabilities at risk

Data isn’t abstract — it has a direct impact on people’s lives.

In 2019, an AI-powered delivery robot momentarily blocked a wheelchair user from safely accessing the curb when crossing a busy road. Speaking about the incident, the person noted how “it’s important that the development of technologies [doesn’t put] disabled people on the line as collateral”.

Alongside other minority groups, people with disabilities have long been harmed by flawed data and data tools. Disabilities are diverse, nuanced, and dynamic; they don’t fit within the formulaic structure of AI, which is programmed to find patterns and form groups. Because AI treats any outlier data as ‘noise’ and disregards it, too often people with disabilities are excluded from its conclusions.

Take for example the case of Elaine Herzberg, who was struck and killed by a self-driving Uber SUV in 2018. At the time of the collision, Herzberg was pushing a bicycle, which meant Uber’s system struggled to categorize her and flitted between labeling her as a ‘vehicle,’ ‘bicycle,’ and ‘other.’ The tragedy raised many questions for people with disabilities: would a person in a wheelchair or a scooter be at risk of the same fatal misclassification?

We need a new way of collecting and processing data. ‘Data’ ranges from personal information, user feedback and resumes to multimedia, user metrics and much more, and it’s constantly being used to optimize our software. But that optimization rarely happens with an understanding of the many nefarious ways data can be, and is, used in the wrong hands, or of what happens when ethical principles aren’t applied to each touchpoint of building.

Our products are long overdue for a new, fairer data framework to ensure that data is managed with people with disabilities in mind. If it isn’t, people with disabilities will face more friction, and dangers, in a day-to-day life that is increasingly dependent on digital tools.

Misinformed data hampers the building of good tools

Products that lack accessibility might not stop people with disabilities from leaving their homes, but they can stop them from accessing pivot points of life like quality healthcare, education, and on-demand deliveries.

Our tools are a product of their environment. They reflect their creators’ world view and subjective lens. For too long, the same groups of people have been overseeing faulty data systems. It’s a closed loop, where underlying biases are perpetuated and groups that were already invisible remain unseen. But as data progresses, that loop becomes a snowball. We’re dealing with machine-learning models — if they’re taught long enough that ‘not being X’ (read: white, able-bodied, cisgendered) means not being ‘normal’, they will evolve by building on that foundation.

Data is interlinked in ways that are invisible to us. It’s not enough to say that your algorithm won’t exclude people with registered disabilities. Biases are present in other sets of data. For example, in the United States it’s illegal to refuse someone a mortgage loan because they’re Black. But by basing the process heavily on credit scores — which have inherent biases detrimental to people of color — banks indirectly exclude that segment of society.

For people with disabilities, indirectly biased data points could include the frequency of physical activity or the number of hours commuted per week. Here’s a concrete example of how indirect bias translates into software: if a hiring algorithm studies candidates’ facial movements during a video interview, a person with a cognitive disability or mobility impairment will experience different barriers than a fully able-bodied applicant.

The problem also stems from people with disabilities not being viewed as part of businesses’ target market. When companies are in the early stages of brainstorming their ideal users, people with disabilities often don’t figure into the picture, especially when their disabilities are less noticeable — like mental health conditions. That means the initial user data used to iterate products or services doesn’t come from these individuals. In fact, 56% of organizations still don’t routinely test their digital products among people with disabilities.

If tech companies proactively included individuals with disabilities on their teams, it’s far more likely that their target market would be more representative. In addition, all tech workers need to be aware of and factor in the visible and invisible exclusions in their data. It’s no simple task, and we need to collaborate on this. Ideally, we’ll have more frequent conversations, forums and knowledge-sharing on how to eliminate indirect bias from the data we use daily.

We need an ethical stress test for data

We test our products all the time — on usability, engagement, and even logo preferences. We know which colors perform better to convert paying customers, and the words that resonate most with people, so why aren’t we setting a bar for data ethics?

Ultimately, the responsibility of creating ethical tech does not just lie at the top. Those laying the brickwork for a product day after day are also liable. It was the Volkswagen engineer (not the company CEO) who was sent to jail for developing a device that enabled cars to evade US pollution rules.

Engineers, designers, product managers: we all have to acknowledge the data in front of us and think about why we collect it and how we collect it. That means dissecting the data we’re requesting and analyzing what our motivations are. Does it always make sense to ask about someone’s disabilities, sex or race? How does having this information benefit the end user?

At Stark, we’ve developed a five-point framework to run when designing and building any kind of software, service or tech. We have to address:

  1. What data we’re collecting
  2. Why we’re collecting it
  3. How it will be used (and how it can be misused)
  4. Simulate IFTTT: ‘if this, then that.’ Explain possible scenarios in which the data could be used nefariously, and propose alternate solutions. For instance, how could users be impacted by an at-scale data breach? What happens if this private information becomes public to their family and friends?
  5. Ship or trash the idea

If we can only explain our data using vague terminology and unclear expectations, or by stretching the truth, we shouldn’t be allowed to have that data. The framework forces us to break down data in the simplest manner, and if we can’t, it’s because we’re not yet equipped to handle it responsibly.
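
As a rough illustration of how a team might operationalize a checklist like this, here is a minimal sketch in Python. It is purely hypothetical: the class name, field names and the ship-or-trash rule are assumptions made for illustration, not Stark’s actual tooling.

    from dataclasses import dataclass, field

    @dataclass
    class DataPointReview:
        # One piece of data a product wants to collect, reviewed against the five questions.
        name: str            # e.g. a hypothetical "disability_status" field
        what: str            # 1. what we're collecting
        why: str             # 2. why we're collecting it
        use_and_misuse: str  # 3. how it will be used (and how it can be misused)
        ifttt_scenarios: list[str] = field(default_factory=list)  # 4. 'if this, then that' scenarios

        def ship(self) -> bool:
            # 5. Ship or trash: ship only if every question has a concrete, plain-language answer.
            answers = [self.what, self.why, self.use_and_misuse, *self.ifttt_scenarios]
            vague = {"", "tbd", "n/a", "might be useful"}
            return bool(self.ifttt_scenarios) and all(a.strip().lower() not in vague for a in answers)

    # Example: the review fails because the team can't articulate why the data is needed.
    review = DataPointReview(
        name="disability_status",
        what="Whether the user self-reports a disability",
        why="might be useful",
        use_and_misuse="Personalize onboarding; could expose sensitive health status if breached",
        ifttt_scenarios=["If breached at scale, then users' health status becomes public to family and friends"],
    )
    print("ship" if review.ship() else "trash")  # -> trash

The design choice here is simply that a data point ships only when every one of the five questions has a concrete, plain-language answer; a single vague answer sends the idea back to the drawing board.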

Innovation has to include people with disabilities

Complex data technology is entering new sectors all the time, from vaccine development to robotaxis. Any bias against individuals with disabilities in these sectors stops them from accessing the most cutting-edge products and services. As we become more dependent on tech in every niche of our lives, there’s greater room for exclusion in how we carry out everyday activities.

This is all about forward thinking and baking inclusion into your product from the start. Money and experience aren’t the limiting factors here: changing your thought process and development journey is free; it’s just a conscious pivot in a better direction. And while the upfront effort may be a heavy lift, the profits you’d lose from not tapping into these markets, or from having to retrofit your product down the line, far outweigh that initial expense. This is especially true for enterprise-level companies that won’t be able to access academic or governmental contracts without being compliant.

So early-stage companies, integrate accessibility principles into your product development and gather user data to constantly reinforce those principles. Sharing data across your onboarding, sales, and design teams will give you a more complete picture of where your users are experiencing difficulties. Later-stage companies should carry out a self-assessment to determine where those principles are lacking in their product, and harness historical data and new user feedback to generate a fix.

An overhaul of AI and data isn’t just about adapting businesses’ framework. We still need the people at the helm to be more diverse. The fields remain overwhelmingly male and white, and in tech, there are numerous first-hand accounts of exclusion and bias towards people with disabilities. Until the teams curating data tools are themselves more diverse, nations’ growth will continue to be stifled, and people with disabilities will be some of the hardest-hit casualties.

Reform the US low-income broadband program by rebuilding Lifeline

“If you build it, they will come” is a mantra that’s been repeated for more than three decades to embolden action. The line from “Field of Dreams” is a powerful saying, but I might add one word: “If you build it well, they will come.”

America’s Lifeline program, a monthly subsidy designed to help low-income families afford critical communications services, was created with the best intentions. The original goal was to achieve universal telephone service, but it has fallen far short of achieving its potential as the Federal Communications Commission has attempted to convert it to a broadband-centric program.

The FCC’s Universal Service Administrative Company estimates that only 26% of the families that are eligible for Lifeline currently participate in the program. That means that nearly three out of four low-income consumers are missing out on a benefit for which they qualify. But that doesn’t mean the program should be abandoned, as the Biden administration’s newly released infrastructure plan suggests.

Rather, now is the right opportunity to complete the transformation of Lifeline to broadband and expand its utilization by increasing the benefit to a level commensurate with the broadband marketplace and making the benefit directly available to end users. Instead, the White House fact sheet on the plan recommends price controls for internet access services with a phaseout of subsidies for low-income subscribers. That is a flawed policy prescription.

If maintaining America’s global competitiveness, building broadband infrastructure in high-cost rural areas, and sustaining the nation’s rapid deployment of 5G wireless services are national goals, the government should not set prices for internet access.

Forcing artificially low prices in the quest for broadband affordability would leave internet service providers with insufficient revenues to continue to meet the nation’s communications infrastructure needs with robust innovation and investment.

Instead, targeted changes to the Lifeline program could dramatically increase its participation rate, helping to realize the goal of connecting Americans most in need with the phone and broadband services that in today’s world have become essential to employment, education, healthcare and access to government resources.

To start, Lifeline program participation should be made much easier. Today, individuals seeking the benefit must go through a process of self-enrollment. Implementing “coordinated enrollment” — through which individuals would automatically be enrolled in Lifeline when they qualify for certain other government assistance benefits, including SNAP (the Supplemental Nutrition Assistance Program, formerly known as food stamps) and Medicaid — would help to address the severe program underutilization.

Because multiple government programs serve the same constituency, a single qualification process for enrollment in all applicable programs would generate government efficiencies and reach Americans who are missing out.

Speaking before the American Enterprise Institute back in 2014, former FCC Commissioner Mignon Clyburn said, “In most states, to enroll in federal benefit programs administered by state agencies, consumers already must gather their income-related documentation, and for some programs, go through a face-to-face interview. Allowing customers to enroll in Lifeline at the same time as they apply for other government benefits would provide a better experience for consumers and streamline our efforts.”

Second, the use of the Lifeline benefit can be made far simpler for consumers if the subsidy is provided directly to them via an electronic Lifeline benefit card account — like the SNAP program’s electronic benefit transfer (EBT) card. Not only would a Lifeline benefit card make participation in the program more convenient, but low-income Americans would then be able to shop among the various providers and select the carrier and the precise service(s) that best suit their needs. The flexibility of greater consumer choice would encourage more program sign-ups.

Third, the current Lifeline subsidy amount — $9.25 per month — isn’t enough to pay for a broadband subscription. For the subsidy to be truly meaningful, an increase in the monthly benefit is needed. Last December, Congress passed the temporary Emergency Broadband Benefit to provide low-income Americans up to a $50 per month discount ($75 per month on tribal lands) to offset the cost of broadband connectivity during the pandemic. After the emergency benefit runs out, a monthly benefit adequate to defray the cost of a broadband subscription will be needed.

In order to support more than a $9.25 monthly benefit, the funding source for the Lifeline program must also be reimagined. Currently, the program relies on the FCC’s Universal Service Fund, which is financed through a “tax” on traditional long-distance and international telephone services.

As greater use is made of the web for voice communications, coupled with less use of traditional telephones, the tax rate has increased to compensate for the shrinking revenues associated with landline phone services. A decade ago, the tax, known as the “contribution factor,” was 15.5%, but it’s now more than double that at an unsustainable 33.4%. Without changes, the problem will only worsen.

It’s easy to see that the financing of a broadband benefit should no longer be tied to a dying technology. Instead, funding for the Lifeline program could come from a “tax” shared across the entire internet ecosystem, including the edge providers that depend on broadband to reach their customers, or from direct congressional appropriations for the Lifeline program.

These reforms are realistic and straightforward. Rather than burn the program down, it’s time to rebuild Lifeline to ensure that it fulfills its original intention and reaches America’s neediest.