Links: 4/16

Beau Lyddon
Published in Real Kinetic Blog
12 min read · Apr 16, 2018


Business / Government / Management / News

Come easy, go easy: The Tech Takedown!

If there is one thing that I have learned about markets over the years, it is that they have a way of leveling egos and cutting companies and investors down to size. The last three weeks have been humbling ones for tech companies, especially the big four (Facebook, Amazon, Netflix and Alphabet or FANG) which seemed unstoppable in their pursuit of revenues and ever-rising market capitalizations, and for tech investors, many of whom seem to have mistaken luck for skill. Not surprisingly, some of the cheerleaders who were just a short while ago telling us that nothing could go wrong with these companies are in the midst of a mood shift, where they are convinced that nothing can go right with them. As Mark Zuckerberg gets ready to testify to Congress, amidst calls for both regulating and perhaps even breaking up tech companies, it is time to take a sober look at where we stand with these companies, what the last three weeks have changed and the consequences for investment decisions.

  • The tech companies are finally being treated as more than just darlings. Which means we might actually see these companies become value buys.

The Internet Apologizes …

Something has gone wrong with the internet. Even Mark Zuckerberg knows it. Testifying before Congress, the Facebook CEO ticked off a list of everything his platform has screwed up, from fake news and foreign meddling in the 2016 election to hate speech and data privacy. “We didn’t take a broad enough view of our responsibility,” he confessed. Then he added the words that everyone was waiting for: “I’m sorry.”

There have always been outsiders who criticized the tech industry — even if their concerns have been drowned out by the oohs and aahs of consumers, investors, and journalists. But today, the most dire warnings are coming from the heart of Silicon Valley itself. The man who oversaw the creation of the original iPhone believes the device he helped build is too addictive. The inventor of the World Wide Web fears his creation is being “weaponized.” Even Sean Parker, Facebook’s first president, has blasted social media as a dangerous form of psychological manipulation. “God only knows what it’s doing to our children’s brains,” he lamented recently.

To understand what went wrong — how the Silicon Valley dream of building a networked utopia turned into a globalized strip-mall casino overrun by pop-up ads and cyberbullies and Vladimir Putin — we spoke to more than a dozen architects of our digital present. If the tech industry likes to assume the trappings of a religion, complete with a quasi-messianic story of progress, the Church of Tech is now giving rise to a new sect of apostates, feverishly confessing their own sins. And the internet’s original sin, as these programmers and investors and CEOs make clear, was its business model.

  • So, I have to ask … if many of these companies weren’t full of folks who lean left, and if it weren’t Donald Trump who appears to have benefited from these issues, would we still see the outrage from the employees?
  • And I fear the “solutions” they propose aren’t realistic. “We need regulations” and “we need laws” sound nice, but how would they work in practice? Would they actually accomplish what we want? At this point, what do we even want?
  • This reminds me of what happens at many companies when there’s an incident. “OMG, we’re down!” “We must do something.” “We haven’t found the issue.” “It doesn’t matter. We must do something!”

Systems / Infrastructure / Cloud

Through the looking glass: Security and the SRE

Over the last few years, DevOps, chaos engineering, and site reliability engineering (SRE) have made foundational shifts in the engineering community worldwide. The discipline of security engineering has gradually made its way into DevOps with DevSecOps, rugged DevOps, and other name variants. Our purpose in this article, which focuses on applying chaos engineering and SRE to the field of cybersecurity, is to share the insights we’ve gathered in our journey and to challenge the community to think differently about how security systems are designed.

  • Chaos and experiment all the things!
  • Push the system
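The "chaos and experiment all the things" idea maps directly onto security: state a hypothesis about how the system should fail, inject the failure, and check. A minimal hypothetical sketch in Python (the service, token names, and drop rate are all invented for illustration, not taken from the article):

```python
import random

def handle_request(token):
    """Toy service: only requests carrying a valid token succeed."""
    if token != "valid-token":
        return {"status": 403, "body": "forbidden"}  # fail closed
    return {"status": 200, "body": "ok"}

def chaos_experiment(trials=1000, drop_rate=0.2):
    """Randomly drop the auth token and count fail-open violations,
    i.e. unauthenticated requests that still got a 200."""
    violations = 0
    for _ in range(trials):
        token = None if random.random() < drop_rate else "valid-token"
        response = handle_request(token)
        if token is None and response["status"] == 200:
            violations += 1
    return violations

# Hypothesis: dropping credentials never lets a request through.
assert chaos_experiment() == 0
```

The point is the shape of the exercise, not the toy service: security properties become falsifiable hypotheses you can run continuously, the same way SREs run availability experiments.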

THE SERVERLESS SERIES — Automating IT Engineers & Reshaping Tech Leadership

Nope, the Cloud has not made all engineers more productive. Instead, many were made redundant, while others were empowered in new ways that keep changing.

Today, the Cloud’s next iteration is Serverless. In this article, we will discuss how the Cloud has continuously empowered the fittest engineers, how the Serverless trend carries on reshaping tech leadership and what that means for businesses.

Ten years ago, the Cloud marketing spiel was around the substantial cost savings of just-in-time infrastructure provisioning. IaaS (Infrastructure as a Service) offerings like AWS and PaaS (Platform as a Service) offerings like Microsoft Azure gave SysAdmins the ability to configure and spin up machines on demand and at scale. While it was technically legitimate, the vast majority of large companies, including the one I was working for at the time, were incapable of adopting it quickly. The reason was simple:

The real effect of the Cloud was to redefine technical leadership, and large organisations were simply not ready for that… But startups were!

  • That last line feels accurate and is something we hammer on. Cloud, DevOps, and Serverless are about cultural changes, and not just within R&D but across the company. These concepts have moved IT from a cost center to a differentiating capability for your business. In other words, get your CTO out from under the CFO and start changing now, or get beat by those who are.

No, seriously. Root Cause is a Fallacy.

I’m just back from attending SREcon ’18 Americas in Santa Clara last week, an incredible conference I’d spoken at before (in Dublin in 2016, as a tutorial) but never in the U.S. You can find some blog posts written about specifics (Day 1, Day 2, Day 3), but I wouldn’t be able to do it justice myself, so read those! Kudos to everyone involved in the hard work of making it run so smoothly; I can only imagine everything that went into it. Presenters were warm and welcoming with deep insights to share, and the attendees were full of great questions, appending their own experiences to the topics at hand. I was also able to meet up with some old friends and make a bunch of new ones. Everything you can hope for when attending a conference.

I wanted to expand a bit on a particular point that I mentioned in my talk, one that seemed to elicit questions from folks afterward, in person and on Twitter: Root Cause is a Fallacy. We’ve used root cause as a shortcut for explaining away problems for a long time, typically as part of RCA (Root Cause Analysis). I’m not the first to write about this. I doubt I’m even the hundredth, and I probably won’t be the last. But we’re still lazily falling back to using it, so it’s good to reinforce.

  • SREcon sounds great and I’m going to add it to my list of conferences to look into.
  • Some of this post may come off as nitpicky, but I like it, especially at the end where Will gives examples of better terminology. We have to get people thinking differently about our systems. But unfortunately, humans don’t like thinking probabilistically, abstractly, etc. We want binary. Give me the “thing”. But that’s just not how the world works.

Programming

Incrementally Improving The DOM

Last time, I tried to convince you that you might not need the virtual DOM, and that many common UI patterns can be reproduced with a completely static page, with changes only happening at the leaves of the tree — attributes and text nodes. For some trickier UI patterns, I added back a limited form of dynamic behavior, by allowing elements with dynamic lists of children.

It is perhaps not terribly surprising that this is possible, since it is, after all, what we used to do before React popularized the virtual DOM (using things like Mustache templates).

The static DOM approach has some limitations of its own, however:

- Dynamic arrays are optimized for modification at the end of the array. Modifications in the middle of an array can trigger a cascade of updates to nodes at the end of the array. In practice, this is not a big problem, but for large arrays it can become a performance issue. One solution to this problem is to create an alternative structure for looping, where the inner template either does not have access to its index, or where the indices do not correspond to the position of the element in the parent array.

- In order to trigger a UI change, however small, we need to construct a new model for the entire static DOM component. Again, in practice, this is not a big problem, but it does make it harder to do certain things. For example, if we wanted to send model changes to the server for evaluation, we would have a hard time.

- Every change is potentially observed by every node in the static DOM. We can use tricks like filtering out duplicate events from our event streams, but this takes unnecessary time and CPU cycles. Recall, the motivation for the static DOM was that we intuitively knew which elements should receive the events for small model changes such as changing a single text node. The challenge is to convince the machine that this connection between submodels and elements is obvious!

In this post, I’d like to suggest a different approach, which solves these problems but keeps the benefits of the static DOM approach.
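The third limitation above — every node potentially observing every change — is the one that motivates making the submodel-to-element connection explicit. A rough, hypothetical Python sketch of that idea (routing each change only to the nodes subscribed to its path; this is my own illustration, not Phil's actual implementation):

```python
class IncrementalDom:
    """Toy model: route each model change directly to the nodes that
    subscribed to that path, instead of broadcasting to every node."""

    def __init__(self):
        self.subscribers = {}   # path tuple -> list of callbacks
        self.notifications = 0  # how many callbacks actually fired

    def subscribe(self, path, callback):
        self.subscribers.setdefault(path, []).append(callback)

    def update(self, path, value):
        # Only nodes registered for this exact path are touched;
        # unrelated nodes never even see the change.
        for cb in self.subscribers.get(path, []):
            cb(value)
            self.notifications += 1

dom = IncrementalDom()
title = {"text": ""}
count = {"text": ""}
dom.subscribe(("header", "title"), lambda v: title.update(text=v))
dom.subscribe(("footer", "count"), lambda v: count.update(text=v))

dom.update(("header", "title"), "Hello")
assert title["text"] == "Hello"
assert count["text"] == ""     # untouched node never observed the change
assert dom.notifications == 1  # exactly one callback fired
```

With the mapping made explicit, no event filtering or duplicate detection is needed: a change to one text node costs one callback, regardless of how large the tree is.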

  • Phil is so good at this. His ability to see these patterns at an abstract level and apply them in a beautifully practical way is remarkable.

Math / Science / Behavior / Economics

PODCAST: Hidden Brain — Tunnel Vision

Have you ever noticed that when something important is missing in your life, your brain can only seem to focus on that missing thing?

Two researchers have dubbed this phenomenon scarcity, and they say it touches on many aspects of our lives.

“It leads you to take certain behaviors that in the short term help you to manage scarcity, but in the long term only make matters worse,” says Sendhil Mullainathan, an economics professor at Harvard University.

Several years ago, he and Eldar Shafir, a psychology professor at Princeton, started researching this idea. Their theory was this: When you’re really desperate for something, you can focus on it so obsessively there’s no room for anything else. The time-starved spend much of their mental energy juggling time. People with little money worry constantly about making ends meet.

Scarcity takes a huge toll. It robs people of insight. And it helps to explain why, when we’re in a hole, we sometimes dig ourselves even deeper.

This week on Hidden Brain, we’ll explore the concept of scarcity and how it affects people across the globe — from sugar cane farmers in India to time-starved physicians in the United States.

  • This was fantastic.
  • The first part is great for empathy building for those having experiences that we may not have had ourselves.
  • The second half introduces something that I’m guessing many of us can relate to.

Blockchain / Crypto

Blockchain is not only crappy technology but a bad vision for the future

Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction.

  • Since I post so many on the positive side I need to make sure to include pieces from the more pessimistic side.
  • And there are very valid points made in this post. The most obvious being that the specific technology isn’t always a solution on its own. It’s how humans interact with it that matters. Those of us in tech too often forget this.

AI / Machine Learning / Data Science / Statistics

https://www.smbc-comics.com/comic/moneybattle

Lessons Learned Reproducing a Deep Reinforcement Learning Paper

There are a lot of neat things going on in deep reinforcement learning. One of the coolest things from last year was OpenAI and DeepMind’s work on training an agent using feedback from a human rather than a classical reward signal. There’s a great blog post about it at Learning from Human Preferences, and the original paper is at Deep Reinforcement Learning from Human Preferences.

Learn some deep reinforcement learning, and you too can train a noodle to do a backflip. From Learning from Human Preferences.

I’ve seen a few recommendations that reproducing papers is a good way of levelling up machine learning skills, and I decided this could be an interesting one to try with. It was indeed a super fun project, and I’m happy to have tackled it — but looking back, I realise it wasn’t exactly the experience I thought it would be.

If you’re thinking about reproducing papers too, here are some notes on what surprised me about working with deep RL.

  • This is a nice walkthrough of somebody doing something complex. I love posts like these.
  • This entire process and his takeaways match what I generally see and feel when going through and learning about any complex system, which is what I think this breaks down to. It’s not so much about “programming” or “research” as it is about stepping through complexity.

Real Kinetic Links for the Week

More Environments Will Not Make Things Easier

Microservices are hard. They require extreme discipline. They require a lot more upfront thinking. They introduce integration challenges and complexity that you otherwise wouldn’t have with a monolith, but service-oriented design is an important part of scaling organization structure. Hundreds of engineers all working on the same codebase will only lead to angst and the inability to be nimble.

This requires a pretty significant change in the way we think about things. We’re creatures of habit, so if we’re not careful, we’ll just keep on applying the same practices we used before we did services. And that will end in frustration.

How can we possibly build working software that comprises dozens of services owned by dozens of teams? Instinct tells us full-scale integration. That’s how we did things before, right? We ran integration tests. We run all of the services we depend on and develop our service against them. But it turns out, these dozen or so services I depend on also have their own dependencies! This problem is not linear.
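The non-linearity is easy to see with a toy dependency graph: to run one service "for real" you need its transitive closure of dependencies, not just its direct ones. A small sketch (all service names are invented for illustration):

```python
def transitive_deps(service, graph, seen=None):
    """Collect every service you'd have to stand up to
    integration-test `service` against real dependencies."""
    if seen is None:
        seen = set()
    for dep in graph.get(service, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, graph, seen)
    return seen

# Hypothetical topology: each service depends on only two others.
graph = {
    "api": ["auth", "orders"],
    "auth": ["users", "audit"],
    "orders": ["users", "inventory"],
    "inventory": ["audit", "pricing"],
    "pricing": [],
    "users": [],
    "audit": [],
}

# "I depend on two services" quietly becomes six to stand up.
assert transitive_deps("api", graph) == {
    "auth", "orders", "users", "audit", "inventory", "pricing"}
```

Even with a modest fan-out of two, the environment you must assemble grows with the depth of the graph, which is why "just spin up everything" stops scaling well before you reach dozens of teams.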

Tyler and I spoke at DevOps Days Des Moines this past week and had a wonderful time. Here are the slides, with references, from our presentations. Video coming sometime in the future.

Tyler’s: The Future of Ops

https://speakerdeck.com/tylertreat/the-future-of-ops

Beau’s: What is Happening?

https://speakerdeck.com/lyddonb/what-is-happening-attempting-to-understand-our-systems

If you’re looking for help with your architecture or development organization feel free to reach out: realkinetic.com @real_kinetic

You can follow me directly @lyddonb
