5 Alternatives to the Diamond Engagement Ring

A first look inside the ambitious Harry Potter theme park opening next month

Scoop: A Glimpse Into the NYTimes CMS

What is Scoop?

Scoop (not to be confused with our mobile listings app, The Scoop) is The New York Times’s homegrown digital and (soon-to-be) print CMS. (We also use WordPress for many of our blogs.) Scoop was initially designed and developed in 2008 in close partnership with the newsroom. Unlike many commercial systems, Scoop does not render our website or provide community tools to our readers. Rather, it is a system for managing content and publishing data so that other applications can render the content across our platforms. This separation of functions gives development teams at The Times the freedom to build solutions on top of that data independently, allowing us to move faster than if Scoop were one monolithic system. For example, our commenting platform and recommendations engine integrate with Scoop but remain separate applications.
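To make the decoupling concrete: a rendering application treats the CMS purely as a data source. The sketch below is only a hypothetical illustration of that pattern; Scoop's actual API, endpoint and field names are internal and not described in this post.

```typescript
// Hypothetical content shape and endpoint, for illustration only;
// Scoop's real content API is internal and not documented here.
interface Article {
  id: string;
  headline: string;
  body: string;
}

// A front-end application fetches published content from the CMS's API
// and decides for itself how to render it; the CMS never renders the page.
async function renderArticle(id: string): Promise<string> {
  const response = await fetch(`https://cms.example.com/api/articles/${id}`);
  if (!response.ok) {
    throw new Error(`Failed to load article ${id}: ${response.status}`);
  }
  const article = (await response.json()) as Article;
  return `<article><h1>${article.headline}</h1>${article.body}</article>`;
}
```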

This post was written and edited in Scoop.

The vision for Scoop has evolved over the years. The beauty of a homegrown CMS is that we can shape its features and technology over time. Since its inception, the Scoop platform has been extended to include many new features such as sophisticated authoring and editing tools and workflows, budgeting, photo manipulation, video management and more robust content APIs. Its user base has swelled from a few dozen web producers to more than 1,000 users, including reporters, copy editors, photo editors and video producers.

Visualizing Algorithms

Algorithms are a fascinating use case for visualization. To visualize an algorithm, we don’t merely fit data to a chart; there is no primary dataset. Instead there are logical rules that describe behavior. This may be why algorithm visualizations are so unusual, as designers experiment with novel forms to better communicate. This is reason enough to study them.

But algorithms are also a reminder that visualization is more than a tool for finding patterns in data. Visualization leverages the human visual system to augment human intellect: we can use it to better understand these important abstract processes, and perhaps other things, too.

This is an adaptation of my talk at Eyeo 2014. A video of the talk will be available soon. (Thanks, Eyeo folks!)

Sampling

Before I can explain the first algorithm, I first need to explain the problem it addresses.

Van Gogh’s The Starry Night

Light — electromagnetic radiation — the light emanating from this screen, traveling through the air, focused by your lens and projected onto the retina — is a continuous signal. To be perceived, we must reduce light to discrete impulses by measuring its intensity and frequency distribution at different points in space.

This reduction process is called sampling, and it is essential to vision. You can think of it as a painter applying discrete strokes of color to form an image (particularly in Pointillism or Divisionism). Sampling is further a core concern of computer graphics; for example, to rasterize a 3D scene by raytracing, we must determine where to shoot rays. Even resizing an image requires sampling.

Sampling is made difficult by competing goals. On the one hand, samples should be evenly distributed so there are no gaps. But we must also avoid repeating, regular patterns, which cause aliasing. This is why you shouldn’t wear a finely-striped shirt on camera: the stripes resonate with the grid of pixels in the camera’s sensor and cause Moiré patterns.

Photo: retinalmicroscopy.com

This micrograph is of the human retina’s periphery. The larger cone cells detect color, while the smaller rod cells improve low-light vision.

The human retina has a beautiful solution to sampling in its placement of photoreceptor cells. The cells cover the retina densely and evenly (with the exception of the blind spot over the optic nerve), and yet the cells’ relative positions are irregular. This is called a Poisson-disc distribution because it maintains a minimum distance between cells, avoiding occlusion and thus wasted photoreceptors.

Unfortunately, creating a Poisson-disc distribution is hard. (More on that in a bit.) So here’s a simple approximation known as Mitchell’s best-candidate algorithm.

Best-candidate

You can see from these dots that best-candidate sampling produces a pleasing random distribution. It’s not without flaws: there are too many samples in some areas (oversampling), and not enough in other areas (undersampling). But it’s reasonably good, and just as important, easy to implement.
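For readers who prefer code to animation, here is a minimal sketch of the best-candidate idea, using a brute-force nearest-neighbor search; a production implementation would accelerate that search with a spatial index such as a quadtree, and the width, height and candidate count below are illustrative.

```typescript
type Point = [number, number];

const width = 960;
const height = 500;
const numCandidates = 10; // more candidates give more even spacing, but slower sampling

// Euclidean distance from a candidate to its nearest existing sample (brute force).
function nearestDistance(samples: Point[], c: Point): number {
  let best = Infinity;
  for (const [x, y] of samples) {
    const dx = x - c[0];
    const dy = y - c[1];
    best = Math.min(best, dx * dx + dy * dy);
  }
  return Math.sqrt(best);
}

// Mitchell's best-candidate: generate k uniformly random candidates and
// keep the one farthest from all previously placed samples.
function bestCandidate(samples: Point[]): Point {
  let best: Point = [Math.random() * width, Math.random() * height];
  let bestDistance = nearestDistance(samples, best);
  for (let i = 1; i < numCandidates; i++) {
    const c: Point = [Math.random() * width, Math.random() * height];
    const d = nearestDistance(samples, c);
    if (d > bestDistance) {
      best = c;
      bestDistance = d;
    }
  }
  return best;
}

// Place 200 samples, one at a time.
const samples: Point[] = [];
for (let i = 0; i < 200; i++) {
  samples.push(bestCandidate(samples));
}
```

Raising numCandidates pushes the result closer to true Poisson-disc quality, at the cost of more distance checks per new sample.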

ROBOTS.TXT DISALLOW: 20 Years of Mistakes To Avoid

The robots.txt standard was first officially rolled out 20 years ago today! Even though 20 years have passed, some folks continue to use robots.txt disallow like it is 1994.

Before jumping right into common robots.txt mistakes, it’s important to understand why standards and protocols for robots exclusion were developed in the first place. In the early 1990s, websites were far more limited in terms of available bandwidth than they are today. Back then it was not uncommon for automated robots to accidentally crash websites by overwhelming a web server and consuming all available bandwidth. That is why the Standard for Robot Exclusion was created by consensus on June 30, 1994. The Robots Exclusion Protocol allows site owners to ask automated robots not to crawl certain portions of their website. By reducing robot traffic, site owners can free up more bandwidth for human users, reduce downtime and help to ensure accessibility for human users. In the early 1990s, site owners were far more concerned about bandwidth and accessibility than URLs appearing in search results.
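As a concrete, simplified illustration (not taken from the original post): a polite crawler fetches /robots.txt, collects the Disallow rules that apply to it, and skips any URL whose path begins with one of those prefixes. The file contents below are hypothetical, and the parser ignores per-agent groups, Allow lines and comments.

```typescript
// Hypothetical robots.txt of the kind a bandwidth-conscious site owner might publish.
const robotsTxt = `
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
`;

// Collect Disallow path prefixes. Simplified: a real parser also handles
// per-user-agent groups, Allow directives and comments.
function parseDisallows(txt: string): string[] {
  return txt
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.toLowerCase().startsWith("disallow:"))
    .map((line) => line.slice("disallow:".length).trim())
    .filter((path) => path.length > 0);
}

// The original protocol matches rules by simple path prefix.
function isCrawlAllowed(path: string, disallows: string[]): boolean {
  return !disallows.some((prefix) => path.startsWith(prefix));
}

const rules = parseDisallows(robotsTxt);
console.log(isCrawlAllowed("/images/logo.png", rules)); // false: skip this URL
console.log(isCrawlAllowed("/press/index.html", rules)); // true: fine to crawl
```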

Throughout internet history sites like WhiteHouse.gov, the Library of Congress, Nissan, Metallica and the California DMV have disallowed portions of their website from being crawled by automated robots. By leveraging robots.txt and the disallow directive, webmasters of sites like these reduced downtime, increased bandwidth and helped ensure accessibility for humans. Over the past 20 years this practice has proved quite successful for a number of websites, especially during peak traffic periods.

Using robots.txt disallow proved to be a helpful tool for webmasters; however, it spelled problems for search engines. For instance, any good search engine had to be able to return quality results for queries like [white house], [metallica], [nissan] and [CA DMV]. Returning quality results for a page is tricky if you cannot crawl the page. To address this issue, Google extracts text about URLs disallowed with robots.txt from sources that are not themselves disallowed. Google compiles this text from allowed sources and associates it with the disallowed URLs. As a result, Google is able to return URLs disallowed with robots.txt in search results. One side effect of using robots.txt disallow was that rankings for disallowed URLs would typically decline for some queries over time. This side effect is the result of not being able to crawl or detect content at URLs disallowed with robots.txt.

What’s Up With That: Building Bigger Roads Actually Makes Traffic Worse

Consumer Rights Are Changing in the EU

Life in the Atomic Do-ocracy

Interface Vision

Cost-Efficient Continuous Integration

Layout in Flipboard for Web and Windows

At Flipboard, we are working hard to build the world’s best personal magazine—a magazine made just for you, filled with the stories you care about most.

Magazine layout design plays a key role in telling those stories. Good layout design frames a story and impacts how you are informed by the content. For example, in the hallways of Sports Illustrated, editors hang up every page of the print edition to be reviewed and manually tweaked before publication.

When you read Flipboard, articles and photographs are laid out in a series of pages you can flip through, just like in a print magazine. Each magazine page layout feels hand-crafted and beautiful—as if editors and designers created it just for you.

How do we automate the whole process of layout design and editing? By slotting your content into custom designed page layouts—like fitting puzzle pieces together. We start with a set of page layouts created by human designers. Then, our layout engine figures out how to best fit your content into these layouts—considering things like page density, pacing, rhythm, image crop and scale.

In many ways, that is the key to Flipboard’s signature look and feel: at its heart is the work of real designers.

In the Beginning

In 2010, we built Flipboard Pages, a layout engine that turns web page articles into magazine pages for the iPad.

Flipboard Pages paginates content from world-class publications including Vanity Fair and National Geographic.

Pages can produce beautiful layouts, replicating the brand identity and custom typography of each publication. Pages used CSS3, SVG and vanilla JavaScript to make rendering as high fidelity and performant as possible on constrained mobile devices (such as the original iPad running iOS 3.2). The download footprint for a publication’s layouts averaged around 90K for layouts, styling, fonts and nameplate images—lighter than the equivalent web page or a single photograph from an article.

A designer first creates a set of about 20 page layouts, divided up into portrait (768x1004) and landscape (1024x748) orientations. From this set, Pages selects the layout that best fits the desired content, inserts the content into the layout, and produces a final page. With this example-based approach, we rely on designers to make layouts clear, distinct and beautiful.
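The post does not show code for this selection step, so the sketch below is only a rough guess at the shape of an example-based selector: score every designer-made layout against the content destined for the page, then keep the best scorer. The Layout and ContentPage fields and the scoring terms are hypothetical stand-ins for whatever Pages actually weighs (density, pacing, rhythm, crop and scale).

```typescript
// Hypothetical, simplified descriptions of a designer-made layout and of the
// content we want to place on one page.
interface Layout {
  name: string;
  orientation: "portrait" | "landscape";
  itemSlots: number;   // how many stories the page holds
  imageSlots: number;  // how many of those slots expect an image
}

interface ContentPage {
  orientation: "portrait" | "landscape";
  itemCount: number;
  imageCount: number;
}

// Higher is better; this toy score only penalizes slot-count mismatches.
function score(layout: Layout, content: ContentPage): number {
  if (layout.orientation !== content.orientation) return -Infinity;
  const itemPenalty = Math.abs(layout.itemSlots - content.itemCount);
  const imagePenalty = Math.abs(layout.imageSlots - content.imageCount);
  return -(2 * itemPenalty + imagePenalty);
}

// Pick the best-fitting layout from the designer-made set.
function selectLayout(layouts: Layout[], content: ContentPage): Layout | undefined {
  let best: Layout | undefined;
  let bestScore = -Infinity;
  for (const layout of layouts) {
    const s = score(layout, content);
    if (s > bestScore) {
      best = layout;
      bestScore = s;
    }
  }
  return best;
}
```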

While Pages could create great layouts, those layouts only worked at the specific sizes they were designed for.

Web and Windows 8.1 presented a new challenge: users can resize browser windows to any size, at any time. To support arbitrary sizes, we needed something better.

Don't Help Your Kids With Their Homework

A Day of Communication at GitHub

How do Scooby Doo and the gang have enough money to travel the world and solve mysteries for free?

The Asshole Answer:  It’s a cartoon, dumbass.

The Perverted Answer:  Velma and Daphne are call girls.  

The Stoner Answer: Shaggy is a pot dealer.

The Cynical Answer:  Shaggy is a pot dealer.

The Optimistic Answer:  Shaggy is a pot dealer.

The Businessman Answer:  Shaggy is a pot dealer.

The Practical Answer: The gang probably charges a fee for their services. 

The Real Answer: Shaggy is definitely a pot dealer.

Camera Develops Pictures With Algorithms, Not Lenses
