BluBracket scores $6.5M seed to help secure code in distributed environments

BluBracket, a new security startup from the folks who brought you Vera, came out of stealth today and announced a $6.5 million seed investment. Unusual Ventures led the round with participation by Point72 Ventures, SignalFire and Firebolt Ventures.

The company was launched by Ajay Arora and Prakash Linga, who until last year were CEO and CTO, respectively, at Vera, a security company that helps businesses secure documents by having the security profile follow the document wherever it goes.

Arora says he and Linga are entrepreneurs at heart, and they were itching to start something new after more than five years at Vera. While both still sit on the Vera board, they decided to attack a new problem.

He says that the idea for BluBracket actually came out of conversations with Vera customers, who wanted something similar to Vera, except to protect code. “About 18-24 months ago, we started hearing from our customers, who were saying, ‘Hey, you guys secure documents and files. What’s becoming really important for us is to be able to share code. Do you guys secure source code?’”

That was not a problem Vera was suited to solve, but it was a light bulb moment for Arora and Linga, who saw an opportunity and decided to seize it. Recognizing that the way development teams operate had changed, they started BluBracket and developed a pair of products to handle the unique set of problems associated with a distributed set of developers working out of a Git repository — whether that’s GitHub, GitLab or Bitbucket.

The first product is BluBracket CodeInsight, an auditing tool available starting today. It gives companies full visibility into who has cloned code from the Git repository. “Once they have a repo, and then developers clone it, we can help them understand what clones exist on what devices, what third parties have their code, and even be able to search open source projects for code that might have been pushed into open source. So we’re creating what we call a blueprint of where an enterprise’s code is,” Arora explained.
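BluBracket hasn’t published how CodeInsight works under the hood, but the core idea of mapping where enterprise code has spread can be illustrated with a content-fingerprint index: hash each file in the company’s repo, then check files found elsewhere (say, in a public project) against that index. Everything below — the file names, contents and the `fingerprint` helper — is a hypothetical sketch, not the product’s actual implementation.

```python
import hashlib

def fingerprint(source: str) -> str:
    """Hash normalized source so trivial whitespace edits still match."""
    normalized = "\n".join(line.strip() for line in source.splitlines() if line.strip())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Index the enterprise repo: fingerprint -> file path (contents are made up)
repo_files = {
    "auth/token.py": "def make_token(user):\n    return sign(user)",
    "billing/rates.py": "RATES = {'basic': 10, 'pro': 99}",
}
index = {fingerprint(src): path for path, src in repo_files.items()}

# A file spotted in a public repo: flag it if it matches enterprise code,
# even though its indentation differs from the original
leaked_candidate = "def make_token(user):\n        return sign(user)"
match = index.get(fingerprint(leaked_candidate))
print(match)  # -> auth/token.py
```

A real system would need far more robust matching (partial snippets, renamed identifiers), but the lookup pattern is the same: a clone or leak is anything whose fingerprint already appears in the enterprise index.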

The second tool, BluBracket CodeSecure, which won’t be available until later in the year, is how you secure that code, including the ability to classify code by level of importance. Code tagged with the highest level of importance gets special status, and companies can attach rules to it, such as a rule that it can’t be distributed to an open source folder without explicit permission.
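CodeSecure itself won’t ship until later in the year, so the sketch below only illustrates the general pattern the article describes: classify files by importance, then enforce distribution rules against those tags, denying anything not explicitly allowed. The tag names, file paths and `can_distribute` function are all hypothetical, not BluBracket’s actual API.

```python
# Hypothetical classification tags attached to files in the repo
CLASSIFICATION = {
    "crypto/keys.py": "restricted",   # highest importance, special status
    "docs/readme.md": "public",
}

# Rules attached to each tag: which destinations are permitted
RULES = {
    "restricted": {"open_source": False, "internal_mirror": True},
    "public": {"open_source": True, "internal_mirror": True},
}

def can_distribute(path: str, destination: str) -> bool:
    """Default-deny policy check: unknown files are treated as public,
    unknown destinations are refused."""
    level = CLASSIFICATION.get(path, "public")
    return RULES[level].get(destination, False)

print(can_distribute("crypto/keys.py", "open_source"))  # False
print(can_distribute("docs/readme.md", "open_source"))  # True
```

This also matches the behavior Arora describes later: code that moves unmodified through the normal pipeline passes, while an attempt to push tagged code to a disallowed destination triggers an authorization check.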

They believe the combination of these tools will enable companies to maintain control over their code, even in a distributed system. Arora says they have taken care to ensure the system provides the needed security layer without affecting the operation of the continuous delivery pipeline.

“When you’re compiling or when you’re going from development to staging to production, in those cases because the code is sitting in Git, and the code itself has not been modified, BluBracket won’t break the chain,” he explained. If you tried to distribute special code outside the system, you might get a message that this requires authorization, depending on how the tags have been configured.

It is very early days for BluBracket, but the company takes its first steps as a startup this week as it emerges from stealth at the RSA security conference in San Francisco. It will also be participating in the conference’s Innovation Sandbox competition for early-stage security startups.

PullRequest snags remote developer hiring platform Moonlight in case of startup buying startup

PullRequest, a startup that provides code review as a service, announced today that it was buying Moonlight, an early stage startup that has built an online platform for hiring remote developers. The companies did not share the terms.

Lyal Avery, founder and CEO at PullRequest, says that he bought this company to expand his range of services. “Our platform is at a place where we’re very confident about our ability to identify issues. We’re moving to the next phase of fixing issues automatically. In order to do that we have to have access to people producing code. So with the developers on our platform that are currently reviewers, as well as the Moonlight folks, we can start to fix the issues we identify, and also attach that to our learning processes,” Avery explained.

This fits with the company’s vision of eventually automating common fixes. It’s currently working on building machine learning models to facilitate that automation. Moonlight gives PullRequest access to the platform’s data, which can help train and perfect the beta models that the company is working on.

Avery says his vision isn’t to replace human developers, so much as to make them faster and more efficient than they are today. He says that from the time a bug is found in website code to the time it gets fixed is on average about six hours. He wants to reduce that to 20 minutes, and he believes that buying Moonlight will give him more data to get to that goal faster, while also expanding the range of services from code review to issue remediation.

It’s fairly unusual for a startup that has raised just over $12 million (according to Crunchbase data) to be out shopping for another, but Avery sees buying small companies like Moonlight as an excellent way to fill in gaps in the platform, while offering an easier path to expansion.

Moonlight is a small shop with just two employees, both of whom will be joining PullRequest, but it has 3,000 developers on its platform, which PullRequest can now access. For now, Avery says that the companies will remain separate, and Moonlight will continue to operate its own website under the PullRequest umbrella.

Moonlight is based in Brooklyn and had raised an undisclosed pre-seed round before being acquired today. PullRequest, which is based in Austin, was a member of the Y Combinator Summer 2017 cohort. It raised a $2.3 million seed round in December 2017 and another $8 million in April 2018.

Getting tech right in Iowa and elsewhere requires insight into data, human behavior

What happened in Iowa’s Democratic caucus last week is a textbook example of how applying technological approaches to public sector work can go badly wrong just when we need it to go right.

While it’s possible to conclude that Iowa teaches us we shouldn’t let tech anywhere near a governmental process, this is the wrong conclusion to reach, and it misses the complexity of what happened and what didn’t. Technology won’t fix a broken policy; the key is understanding what it is good for.

What does it look like to get technology right in solving public problems? Three core principles can help build public-interest technology more effectively: solve an actual problem; design with and for users and their lives in mind; and start small (test, improve, test again).

Before developing an app or throwing a new technology into a political process, it is worth asking: what is the goal of this app, and what will it do to improve on the existing process?

Getting it right starts with understanding the humans who will use what you build to solve an actual problem. What do they actually need? In the case of Iowa, this would have meant asking seasoned local organizers about what would help them during the vote count. It also means talking directly to precinct captains and caucus goers and observing the unique process in which neighbors convince neighbors to move to a different corner of a school gymnasium when their candidate hasn’t been successful. In addition to asking about the idea of a web application, it is critical to test the application with real users under real conditions to see how it works and make improvements.

In building such a critical game-day app, you need to test it under real-world conditions, which means adoption and ease of use matter. While Shadow (the company charged with this build) did a lightweight test with some users, there wasn’t the runway to adapt to or learn from those for whom the app was designed. The app may have worked fine, but that doesn’t matter if people didn’t use it or couldn’t download it.

One model of how this works can be found in the Nurse Family Partnership, a high-impact nonprofit that helps first-time, low-income moms.

This nonprofit has adapted to have feedback loops from its moms and nurses via email and text messages. It even has a full-time role “responsible for supporting the organization’s vision to scale plan by listening and learning from primary, secondary and internal customers to assess what can be done to offer an exceptional Nurse-Family Partnership experience.”

Building on its program of in-person assistance, the Nurse Family Partnership co-designed an app (with Hopelab, a social innovation lab, in collaboration with behavioral-science-based software company Ayogo). The Goal Mama app builds upon the relationship between nurses and moms. It was developed with these clients in mind after research showed the majority of moms in the program were using their smartphones extensively, so this would help meet moms where they were. Through this approach of using technology and data to address the needs of their workforce and clients, they have served 309,787 moms across 633 counties and 41 states.

Another example is the work of Built for Zero, a national effort focused on the ambitious goal of ending homelessness across 80 cities and counties. Community organizers start with the personal challenges of the unhoused — they know that without understanding the person and their needs, they won’t be able to build successful interventions that get them housed. Their work combines a methodology of human-centered organizing with smart data science to deliver constant assessment and improvements, and they collaborate with the Tableau Foundation to build and train communities to collect data with new standards and monitor progress toward a goal of zero homelessness.

Good tech always starts small, tests, learns and improves with real users. Parties, governments and nonprofits should expand on the learning methods that are common to tech startups and espoused by Eric Ries in The Lean Startup. By starting with small tests and learning quickly, public-interest technology acknowledges the high stakes of building technology to improve democracy: real people’s lives are at stake. With questions about equity, justice, legitimacy and integrity on the line, starting small helps ensure enough runway to make important changes and work out the kinks.

Take for example the work of Alia. Launched by the National Domestic Workers Alliance (NDWA), it’s the first benefits portal for house cleaners. Domestic workers do not typically receive employee benefits, making things like taking a sick day or visiting a doctor impossible without losing pay.

Its easy-to-use interface enables people who hire house cleaners to contribute directly to their benefits, allowing workers to receive paid time off, accident insurance and life insurance. Alia’s engineers benefited from deep user insights gained by connecting to a network of house cleaners. In the growing gig economy, the Alia model may be instructive for a range of workers across local, state and federal levels. Obama organizers in 2008 dramatically increased volunteer sign-ups (by up to 18%) just by A/B testing the words and colors used for the call to action on their website.

There are many instructive public-interest technologies that focus on designing not just for the user, but with the user. This includes work in civil society, such as the Center for Civic Design, which ensures people can have easy and seamless interactions with government, and The Principles for Digital Development, the first of which is “design with the user.” There is also work being done inside governments, from the Government Digital Service in the U.K. to the United States Digital Service, which was launched in the Obama administration.

Finally, it also helps to deeply understand the conditions in which technology will be used. What are the lived experiences of the people who will be using the tool? Did the designers dig in and attend a caucus to see how paper has captured the moving of bodies and changing of minds in gyms, cafes and VFW halls?

In the case of Iowa, it requires understanding the caucuses’ norms, rules and culture. A political caucus is a unique situation.

Not to mention, this year the Iowa caucuses deployed several process changes that increased transparency but also complicated the process, which also needed to be taken into account when deploying a tech solution. Understanding the conditions in which technology is deployed requires a nuanced understanding of policies and behavior, and of how policy changes can impact design choices.

Building a technical solution without doing the user research to see what people really need risks reducing credibility and further eroding trust. Building the technology itself is often the simple part. The complex part is relational: it requires investing in the capacity to engage, train, test and iterate.

We are accustomed to same-day delivery and instantaneous streaming in our private and social lives, which raises our expectations for what we want from the public sector. The push to modernize and streamline is what leads to believing an app is the solution. But building the next killer app for our democracy requires more than just prototyping a splashy tool.

Public-interest technology means working toward the broader, difficult challenge of rebuilding trust in our democracy. Every time we deploy tech to modernize a process, we need to remember this end goal and make sure we’re getting it right.