The Refactoring Tales

Welcome to The Refactoring Tales, a book that documents some of the refactorings and changes I’ve made in recent (and mostly real-life) projects. This book isn’t going to teach you about language constructs, conditionals, functions, and so on, but it will hopefully offer insight into how to take steps to make your code more readable and, more importantly, more maintainable.

Think of how much time you spend maintaining code rather than writing code from scratch. Day to day, I’m not typically creating new projects; I’m maintaining, editing or refactoring existing ones. This book works the same way. Each chapter starts by looking at some existing code, and over the course of a few pages we examine, dissect and then refactor that code into an improved alternative. Of course, the idea of code being “better” is largely subjective, but even if you don’t quite agree with every step I take, you should be able to see the overall benefits.

No more clientside spaghetti. Organizing your code. | Human JavaScript

Code is as much about people as it is about computers. Sure, it’s run by computers, but it’s written by, maintained by, and ultimately created for people. People are not computers. We are not robots. We are unpredictable, flawed, and irrational. The same people with the same tools and instructions won’t produce the same output each time. We generally don’t like being alone and we don’t work well in isolation. In fact, in order to do our best work we need to work with other people. None of these traits are bad things, quite the opposite. They’re what makes us who we are, they make us, well… human. Yet, as developers it’s easy for us to get so focused on optimizing for technology that we forget to optimize for people.

You can read about JavaScript, the language, elsewhere. Its good parts, bad parts, and ugly parts are well documented. This is a book about a specific set of tools, patterns, and approaches that we feel are optimized for people. These approaches enable our team to quickly build and deliver high-quality JavaScript applications for humans.

Markov Chains

Markov chains, named after Andrey Markov, are mathematical systems that hop from one “state” (a situation or set of values) to another. For example, if you made a Markov chain model of a baby’s behavior, you might include “playing,” “eating,” “sleeping,” and “crying” as states, which together with other behaviors could form a “state space”: a list of all possible states. In addition, on top of the state space, a Markov chain tells you the probability of hopping, or “transitioning,” from one state to any other state: for example, the chance that a baby currently playing will fall asleep in the next five minutes without crying first.
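To make that concrete, here is a minimal TypeScript sketch of such a chain. The states come from the baby example above, but the transition probabilities are invented purely for illustration; running step repeatedly produces a random walk whose long-run behavior is governed entirely by the transition table.

```typescript
// States from the baby example; the probabilities below are made up.
type State = "playing" | "eating" | "sleeping" | "crying";

// Transition probabilities: each row sums to 1.
const transitions: Record<State, Record<State, number>> = {
  playing:  { playing: 0.5, eating: 0.2, sleeping: 0.2, crying: 0.1 },
  eating:   { playing: 0.3, eating: 0.2, sleeping: 0.4, crying: 0.1 },
  sleeping: { playing: 0.3, eating: 0.3, sleeping: 0.3, crying: 0.1 },
  crying:   { playing: 0.1, eating: 0.3, sleeping: 0.2, crying: 0.4 },
};

// Pick the next state by sampling from the current state's row.
function step(current: State): State {
  let r = Math.random();
  for (const [next, p] of Object.entries(transitions[current])) {
    r -= p;
    if (r <= 0) return next as State;
  }
  return current; // guard against floating-point rounding
}

// Walk the chain for a few steps.
let state: State = "playing";
for (let i = 0; i < 5; i++) {
  state = step(state);
  console.log(state);
}
```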

First-person Hyperlapse Videos

3D Object Manipulation in a Single Photograph using Stock 3D Models

5 Alternatives to the Diamond Engagement Ring

A first look inside the ambitious Harry Potter theme park opening next month

Scoop: A Glimpse Into the NYTimes CMS

What is Scoop?

Scoop (not to be confused with our mobile listings app, The Scoop) is The New York Times’s homegrown digital and (soon-to-be) print CMS. (We also use WordPress for many of our blogs.) Scoop was initially designed and developed in 2008 in close partnership with the newsroom. Unlike many commercial systems, Scoop does not render our website or provide community tools to our readers. Rather, it is a system for managing content and publishing data so that other applications can render the content across our platforms. This separation of functions gives development teams at The Times the freedom to build solutions on top of that data independently, allowing us to move faster than if Scoop were one monolithic system. For example, our commenting platform and recommendations engine integrate with Scoop but remain separate applications.

This post was written and edited in Scoop.

The vision for Scoop has evolved over the years. The beauty of a homegrown CMS is that we can shape its features and technology over time. Since its inception, the Scoop platform has been extended to include many new features such as sophisticated authoring and editing tools and workflows, budgeting, photo manipulation, video management and more robust content APIs. Its user base has swelled from a few dozen web producers to more than 1,000 users, including reporters, copy editors, photo editors and video producers.

Visualizing Algorithms

Algorithms are a fascinating use case for visualization. To visualize an algorithm, we don’t merely fit data to a chart; there is no primary dataset. Instead there are logical rules that describe behavior. This may be why algorithm visualizations are so unusual, as designers experiment with novel forms to better communicate. This is reason enough to study them.

But algorithms are also a reminder that visualization is more than a tool for finding patterns in data. Visualization leverages the human visual system to augment human intellect: we can use it to better understand these important abstract processes, and perhaps other things, too.

This is an adaptation of my talk at Eyeo 2014. A video of the talk will be available soon. (Thanks, Eyeo folks!)

Sampling

Before I can explain the first algorithm, I first need to explain the problem it addresses.

Van Gogh’s The Starry Night

Light — electromagnetic radiation — the light emanating from this screen, traveling through the air, focused by your lens and projected onto the retina — is a continuous signal. To be perceived, we must reduce light to discrete impulses by measuring its intensity and frequency distribution at different points in space.

This reduction process is called sampling, and it is essential to vision. You can think of it as a painter applying discrete strokes of color to form an image (particularly in Pointillism or Divisionism). Sampling is further a core concern of computer graphics; for example, to rasterize a 3D scene by raytracing, we must determine where to shoot rays. Even resizing an image requires sampling.

Sampling is made difficult by competing goals. On the one hand, samples should be evenly distributed so there are no gaps. But we must also avoid repeating, regular patterns, which cause aliasing. This is why you shouldn’t wear a finely-striped shirt on camera: the stripes resonate with the grid of pixels in the camera’s sensor and cause Moiré patterns.

Photo: retinalmicroscopy.com

This micrograph is of the human retina’s periphery. The larger cone cells detect color, while the smaller rod cells improve low-light vision.

The human retina has a beautiful solution to sampling in its placement of photoreceptor cells. The cells cover the retina densely and evenly (with the exception of the blind spot over the optic nerve), and yet the cells’ relative positions are irregular. This is called a Poisson-disc distribution because it maintains a minimum distance between cells, avoiding occlusion and thus wasted photoreceptors.

Unfortunately, creating a Poisson-disc distribution is hard. (More on that in a bit.) So here’s a simple approximation known as Mitchell’s best-candidate algorithm.


Best-candidate

You can see from these dots that best-candidate sampling produces a pleasing random distribution. It’s not without flaws: there are too many samples in some areas (oversampling), and not enough in other areas (undersampling). But it’s reasonably good, and just as important, easy to implement.
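For readers who want to see the idea in code rather than dots, here is a rough TypeScript sketch of best-candidate sampling as commonly described: for each new sample, generate a handful of uniform random candidates and keep the one farthest from every sample placed so far. The unit-square domain, the candidate count, and the brute-force nearest-neighbor search are all simplifications of my own; a faster version would use a spatial index such as a quadtree.

```typescript
type Point = [number, number];

function distance([ax, ay]: Point, [bx, by]: Point): number {
  return Math.hypot(ax - bx, ay - by);
}

// Distance from a candidate to its nearest existing sample.
function nearestDistance(candidate: Point, samples: Point[]): number {
  return samples.reduce(
    (best, s) => Math.min(best, distance(candidate, s)),
    Infinity
  );
}

// Generate n samples in the unit square; for each, try k random candidates
// and keep the one farthest from all samples placed so far.
function bestCandidateSampler(n: number, k = 10): Point[] {
  const samples: Point[] = [[Math.random(), Math.random()]];
  while (samples.length < n) {
    let best: Point = [Math.random(), Math.random()];
    let bestDist = nearestDistance(best, samples);
    for (let i = 1; i < k; i++) {
      const candidate: Point = [Math.random(), Math.random()];
      const d = nearestDistance(candidate, samples);
      if (d > bestDist) {
        best = candidate;
        bestDist = d;
      }
    }
    samples.push(best);
  }
  return samples;
}

console.log(bestCandidateSampler(100).slice(0, 5));
```

Raising k pushes the result closer to a true Poisson-disc distribution, at the cost of more distance checks per sample.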

ROBOTS.TXT DISALLOW: 20 Years of Mistakes To Avoid

The robots.txt standard was first officially rolled out 20 years ago today! Even though 20 years have passed, some folks continue to use robots.txt disallow like it’s 1994.

Before jumping right into common robots.txt mistakes, it’s important to understand why standards and protocols for robots exclusion were developed in the first place. In the early 1990s, websites were far more limited in terms of available bandwidth than they are today. Back then it was not uncommon for automated robots to accidentally crash websites by overwhelming a web server and consuming all available bandwidth. That is why the Standard for Robot Exclusion was created by consensus on June 30, 1994. The Robots Exclusion Protocol allows site owners to ask automated robots not to crawl certain portions of their website. By reducing robot traffic, site owners can free up bandwidth for human users, reduce downtime and help ensure accessibility. In the early 1990s, site owners were far more concerned about bandwidth and accessibility than URLs appearing in search results.
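As a concrete (and deliberately simplified) illustration of the disallow directive, the TypeScript sketch below checks a URL path against a made-up robots.txt. The file contents and paths are hypothetical, and real parsers also honor per-agent groups, Allow rules, and wildcard patterns, none of which are handled here.

```typescript
// A hypothetical robots.txt asking all robots to skip two directories.
const robotsTxt = `
User-agent: *
Disallow: /private/
Disallow: /tmp/
`;

// Collect the Disallow path prefixes (ignoring per-agent grouping).
function parseDisallows(txt: string): string[] {
  return txt
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.toLowerCase().startsWith("disallow:"))
    .map((line) => line.slice("disallow:".length).trim())
    .filter((path) => path.length > 0);
}

// A URL path is off-limits if it starts with any disallowed prefix.
function isDisallowed(path: string, disallows: string[]): boolean {
  return disallows.some((prefix) => path.startsWith(prefix));
}

const rules = parseDisallows(robotsTxt);
console.log(isDisallowed("/private/reports.html", rules)); // true
console.log(isDisallowed("/index.html", rules));           // false
```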

Throughout internet history sites like WhiteHouse.gov, the Library of Congress, Nissan, Metallica and the California DMV have disallowed portions of their website from being crawled by automated robots. By leveraging robots.txt and the disallow directive, webmasters of sites like these reduced downtime, increased bandwidth and helped ensure accessibility for humans. Over the past 20 years this practice has proved quite successful for a number of websites, especially during peak traffic periods.

Using robots.txt disallow proved to be a helpful tool for webmasters; however, it spelled problems for search engines. For instance, any good search engine had to be able to return quality results for queries like [white house], [metallica], [nissan] and [CA DMV]. Returning quality results for a page is tricky if you cannot crawl the page. To address this issue, Google extracts text about URLs disallowed with robots.txt from sources that are not disallowed with robots.txt. Google compiles this text from allowed sources and associates it with URLs disallowed with robots.txt. As a result, Google is able to return URLs disallowed with robots.txt in search results. One side effect of using robots.txt disallow was that rankings for disallowed URLs would typically decline for some queries over time. This side effect is the result of not being able to crawl or detect content at URLs disallowed with robots.txt.

What’s Up With That: Building Bigger Roads Actually Makes Traffic Worse

Consumer Rights Are Changing in the EU

Life in the Atomic Do-ocracy

Interface Vision

Cost-Efficient Continuous Integration
