Find: A Bewitching Look at Migration Patterns Among American States

The color, curvature, and symmetry here are especially appealing. However, the form doesn't scale well: it's hard to see the flows of all states at once, and the bidirectional nature of the flows isn't represented. A centered matrix would communicate more clearly, though some pleasing curvature would be lost.

Good inspiration for the NC innovation data. County-to-county flows?

*** 
 
// published on Latest Posts | The Atlantic Cities // visit site
A Bewitching Look at Migration Patterns Among American States

New York residents must really get sick of the winter snow and gloom. How else to explain that more of them moved to Florida in 2012 than any other state?

That's just one of the fascinating nuggets of demographic trivia waiting to be uncovered in this wild-looking visualization of state-to-state migrations. The prismatic, arc-veined portal – like peering into the scope of an alien hyper-rifle – shows the movements of the roughly 7.1 million Americans who relocated across state lines in 2012. It's based on the U.S. Census Bureau's American Community Survey, an annual tabulation of moves that just so happens to include the involuntary uprootings of prisoners and members of the military.

"Restless America" is the work of Chris Walker, a data-analytics virtuoso in Mumbai who also made that clever visualization of property values in New York City. As to why he embarked on this project, Walker explains via email:

I'm really interested in migration, as I think migration patterns show that people still see opportunity and hope for better lives, and they're willing to take risks. I see migration as a form of 'creative destruction'; it renews and enriches some communities while eroding others. This process strains individual cities, but I think it's healthy for the country overall. People need to dream and be allowed to act on their dreams. I wanted to show this on a national scale.

The graphic may look like spaghetti pie at first glance, but it really is beautifully simple once you learn how to navigate it. Here's Walker explaining about that:

The visualization is a circle cut up into arcs, the light-colored pieces along the edge of the circle, each one representing a state. The arcs are connected to each other by links, and each link represents the flow of people between two states. States with longer arcs exchange people with more states (California and New York, for example, have larger arcs). Links are thicker when there are relatively more people moving between two states. The color of each link is determined by the state that contributes the most migrants, so for example, the link between California and Texas is blue rather than orange, because California sent over 62,000 people to Texas, while Texas only sent about 43,000 people to California. Note that, to keep the graphic clean, I only drew a link between two states if they exchanged at least 10,000 people.
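Walker's drawing rules are simple enough to sketch in a few lines. This is a minimal illustration, not his actual code: the dict-of-directed-counts structure and the `link` function are my own assumptions; only the 10,000-person threshold, the thickness rule, and the dominant-contributor color rule come from his description.

```python
# Minimal sketch of the link rules Walker describes. The data structure
# and function are hypothetical; the threshold and color rule are his.
MIN_EXCHANGE = 10_000  # total exchange below this is not drawn

def link(flows, a, b):
    """Describe the link between states a and b, or None if too small to draw."""
    a_to_b = flows.get((a, b), 0)
    b_to_a = flows.get((b, a), 0)
    total = a_to_b + b_to_a
    if total < MIN_EXCHANGE:
        return None  # omitted to keep the graphic clean
    # the state sending more migrants sets the link's color
    color = a if a_to_b >= b_to_a else b
    return {"color": color, "thickness": total}

flows = {("California", "Texas"): 62_702, ("Texas", "California"): 43_000}
print(link(flows, "California", "Texas"))  # link takes California's color
```

Run on the California/Texas numbers from the quote, the link is drawn in California's color because its outbound flow dominates.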

For an example, let's go back to New York. If you put the mouse pointer over the state name, the graphic quickly informs you that more people recently exited than entered – 405,864 versus 270,053. It also resolves into this minimalist view:

Gray strings represent all the states that New York sent more than 10,000 people to in 2012. The thickest band runs to Florida; click on it and you'll see that 53,009 New Yorkers headed for the Sunshine State and are perhaps appearing in Florida Man's Twitter feed this very instant. Conversely, 27,392 Floridians moved to New York and might now be experiencing the joy of $14.50 packs of cigarettes.

Regarding the uneven transfer of bodies between these particular states, Walker writes that his "hunch is that these are retirees" decamping for the balmy Southeast. Other popular destinations for people escaping from New York include New Jersey and Pennsylvania, which Walker has a theory for, as well: "More likely these folks are leaving pricey New York City for more affordable suburbs in neighboring states."

Not all interstate transmissions are this lucid. Take a look at California, for instance, which last year had migration pathways to more than 30 states:

With such a geyser of colored lines, it might be hard to immediately fathom the most basic point: in 2012, more people left California (566,986) than entered (493,641). Walker believes the imbalance may be due to residents tired of exorbitant prices seeking a lower cost of living. Here are a few more of his insights:

  • Migrants are flocking to Florida. Interestingly, the state contributing the most migrants to Florida is neighboring Georgia. Texas, New York, and North Carolina are the next-largest contributors.
  • Texas is the second-largest destination for migrants. Over 500,000 people moved to Texas in 2012. People tend to come from the Southeast, Southwest, and West, with the biggest contributor being California: 62,702 Californians packed up and moved to the Lone Star State in 2012.
  • Most people leaving DC tend to stay in the area, opting for Virginia or Maryland. The economy of DC, centered around the federal government, seems to discourage more distant migrations.
  • The migrants who leave two very cold states, Maine and Alaska, have very clear preferences. Their most popular destinations are Florida and California.

Images created by Chris Walker

Find: AMD launches first integrated CPU/GPU

With Mantle

*** 
 
 // published on AnandTech // visit site
AMD Kaveri APU Launch Details: Desktop, January 14th

Kicking off today is AMD’s annual developer conference, which now goes by the name APU13. There will be several APU/CPU related announcements coming out of the show this week, but we’ll start with what’s likely to be the most interesting for our regular readers: the launch date for AMD’s Kaveri APU.

First and foremost, AMD has confirmed that Kaveri will be shipping in Q4’13, with a launch/availability date of January 14th, 2014. For those of you keeping track of your calendars, this is the week after CES 2014, with AMD promising further details on the Kaveri launch for CES.

Second of all, we have confirmation of what the highest shipping APU configuration will be. Kaveri will have up to 4 CPU cores (2 modules), based on AMD’s latest revision of their desktop CPU architecture, Steamroller. Meanwhile the GPU will be composed of 8 GCN 1.1 CUs, which puts the SP count at 512 (equivalent to today's desktop Radeon HD 7750). Furthermore AMD is throwing around a floating point performance number – 856 GFLOPS – which, thanks to some details PCWorld found in AMD's footnotes, gives us specific clockspeeds and even a product name: the A10-7850K, with a 3.7GHz CPU clockspeed and a 720MHz GPU clockspeed.
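The 856 GFLOPS figure lines up neatly with those clockspeeds. As a back-of-the-envelope check (not AMD's official math), assume FMA counts as two FLOPs per SP per cycle on the GPU, and that each Steamroller core contributes 8 single-precision FLOPs per cycle:

```python
# Back-of-the-envelope check on AMD's 856 GFLOPS figure. The per-cycle
# FLOP assumptions (2 per SP via FMA, 8 per Steamroller core) are mine.
gpu_gflops = 512 * 2 * 0.720   # 512 SPs at 720MHz -> 737.28 GFLOPS
cpu_gflops = 4 * 8 * 3.7       # 4 cores at 3.7GHz -> 118.40 GFLOPS
total = gpu_gflops + cpu_gflops
print(round(total, 2))         # 855.68, right at AMD's quoted ~856
```

That the pieces sum so closely to the quoted number is what lets the clockspeeds be reverse-engineered from the footnotes in the first place.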

Third, in a departure from how AMD launched Trinity and Richland, Kaveri will be coming to the desktop first. The January 14th date is for the availability of desktop socket FM2+ Kaveri APUs, with server and mobile APUs to follow (these are presumably some of the CES details to come). Pricing and specific SKUs will of course be announced at a later time, and there wasn’t any clarification on whether this was just for OEM hardware, or if we’ll be seeing retail CPUs too.

Finally, AMD has confirmed on the GPU side that Kaveri will be shooting for feature parity with AMD’s latest discrete GPUs, by supporting many of the same features. Specifically, TrueAudio will be making an appearance on Kaveri, bringing AMD’s dedicated audio processing block to their APUs as well as their GPUs. On the discrete GPUs this is a move that was mostly about functionality, but on Kaveri it should take on a second role due to the fact that it’s exactly the kind of CPU-constrained environment for which having dedicated hardware will be a boon. Furthermore, AMD has also confirmed that their new low-level API, Mantle, will also be supported on Kaveri – it is after all a GCN based GPU.

For AMD Kaveri is going to be a big deal; likely the biggest CPU/APU launch for the company in quite some time. Since the acquisition of ATI all the way back in 2006 this is what the company has been building up to: producing a processor with a highly integrated CPU/GPU that allows both of them to be leveraged nearly-transparently by software. Kaveri is the launch vehicle for HSA both as a specific standard and as a general concept for a PC CPU/APU, so it’s something that everyone inside and outside of AMD will be watching closely.


Find: New wave of online peer review and discussion tools frightens some scientists

Interesting ...
  
*** 
 
// published on Ars Technica // visit site
New wave of online peer review and discussion tools frightens some scientists
Sites like Publons and PubPeer hope to quicken the pace of scientific conversation.

Earlier this year, I wrote a story about a new HIV/AIDS detection kit that was under development. Since that time, the same group has published two more papers on the same topic, but questions are starting to be asked about the original research. The questions were so simple that I was pretty embarrassed I didn't spot the problems on my own.

But I wouldn't have gotten even that far were it not for the new directions that peer review and social media are taking science. I was alerted to the problems by Twitter user @DaveFernig, who pointed me to a discussion about the paper on PubPeer.

Before getting to that, let's recap what impressed me about the HIV detection paper. It achieved a couple of things that made it stand out from a veritable truckload of similar proof-of-principle experiments. The test was very sensitive—so sensitive that it could detect viral loads below that of the standard test and may even reach single-molecule sensitivity. When someone invokes single-molecule sensitivity, I tend to get all hot and bothered, and all my critical thinking faculties vanish for a while.


Find: Finally, a modular phone architecture - Motorola's Project Ara

A great idea for Google, Motorola, and everyone. It will set Motorola apart from the crowd, which it sorely needs, and set us all on the path toward cheaper, more renewable mobile devices, which we all sorely need. It also improves the experience by giving users a new way to express themselves through customization.

Phones are small enough now that I think the extra space modularity requires won't be a serious problem.

Really hope Motorola sees this through.

*** 
 
// published on AnandTech // visit site
Motorola's Project Ara: Phonebloks from an OEM

Phonebloks was a campaign focused on attracting the interest of OEMs by showing that there was an incredible amount of interest in a modular phone, mostly for reasons of reducing electronic waste, the potential for incredible customization, and the potential for reduced upgrade costs over the 1-2 year upgrade cycle. As the current model requires the purchase of an entire phone, upgrading a single “module”, or a set of modules, would reduce the cost of upgrading to the consumer, much like upgrading individual components in a desktop PC.

However, at the time it seemed unlikely that such a campaign would ever produce a meaningful result in the industry. That now seems less so: Motorola has announced Project Ara, a platform that promises the same modularity the Phonebloks campaign was promoting, and has partnered with the creator of the Phonebloks campaign for the project. The concept is largely the same, with an endoskeleton and modules that make up the phone. The display, following the Phonebloks concept, is also likely to be its own module. While actual details of the concept are effectively nil, there are still an enormous number of challenges that such a design would face.

The first is purely a hardware problem: there is an unavoidable tradeoff between volumetric efficiency and modularity in such a design. While modern smartphones are effectively a tight stack of PCB, battery, and display, a modular design adds an entire interface for each module to connect it to the rest of the phone. A memory module, for example, would effectively grow from the size of an average eMMC chip to around a full-size SD card because of the durable interface it needs. The cost is most readily seen in the differences between the international and Korean LG G2: the international variant has a ~15% larger battery by virtue of its sealed design, which allowed LG Chem's curved battery pack with thinner walls to fit more capacity.

The second issue is regulatory, as the FCC only tests single configurations for approval. Such a design would be incredibly challenging to certify, since a specific combination of modules could easily produce unpredictable RF behavior. The endoskeleton poses problems too: because the modules aren't all part of a single PCB, a modular design would face short circuits and other connection issues that an integrated board is unlikely to suffer.

The final major issue is history: the failure of Intel’s Whitebook initiative from 2006 makes it much harder to see a similar initiative succeeding in the smartphone space. The Whitebook initiative promised a DIY, modular laptop, much like Phonebloks and Project Ara, and it failed with the rise of completely integrated laptop designs such as Apple’s rMBP line. While laptops like the rMBP are effectively impossible for the user to repair, much less open, they have become incredibly popular, and PC OEMs have followed Apple’s lead. With consumer demand generally tending toward thinner and lighter laptops, and the same trend apparent in smartphones, it is difficult to see such an initiative succeeding without significant compromise, either in modularity or in competitiveness with more integrated phones. Such initiatives are sure to garner widespread enthusiast support, but enthusiasts generally lose their ability to influence a market segment once it becomes popular with general consumers, as the PC industry shows. Still, it remains to be seen whether there is mass-market appeal for such a phone, and it may well be that Motorola is tapping a niche with enormous potential.

Find: Intel Opens Fabs to Competing Chips

Nvidia GPUs and Motorola ARM SoCs fabbed by Intel in the USA. Whooda thunk it?

Intel is generally one generation ahead in process size, so those who contract with them will have a power and performance advantage. GPUs, for example, could see a sudden jump in speed. Phones could get improved battery life.

You know that things are bad at Intel if they can't use all their fab capacity.

*** 
 
// published on AnandTech // visit site
Intel Opens Fabs to Competing Chips

In a story posted today on Forbes, Altera has announced that they have entered into a partnership with Intel to have their next generation of 64-bit ARM chips produced at Intel’s fabs. Details on precisely what process technology will be used on the upcoming chips are scant, but 22nm would give anyone willing to pay Intel’s price a leg up on the competition, and of course Intel will be moving to 14nm in the future. Really, this announcement would be interesting even if someone were to merely use Intel’s older 32nm fabs.

Intel has apparently inked deals with other companies as well. The Inquirer has this quote from an Intel spokesperson: “We have several design wins thus far and the announcement with Altera in February is an important step towards Intel's overall foundry strategy. Intel will continue to be selective on customers we will enable on our leading edge manufacturing process.”

The key there is the part about being “selective”, but I would guess it’s more a question of whether a company has the volume as well as the money to pay Intel, rather than whether or not Intel would be willing to work with them. This announcement opens the doors for future opportunities – NVIDIA GPUs on Intel silicon would surely be interesting, but given that AMD has gone fabless as well we could also see their future CPUs/GPUs fabbed by Intel.

If we take things back another step, the reality of the semiconductor business is that fabs are expensive to build and maintain. Then they need to be updated every couple of years to the latest technology, or at least new fabs need to be built, to stay competitive. If you can’t run your fabs more or less at capacity, you start to fall behind on all fronts. As long as Intel could more than fully utilize its fabrication assets this wasn't a concern, but that era appears to be coming to a close.

The reason for this is pretty simple. We’re seeing a major plateau in terms of the computing performance most people need on a regular basis these days. Give me an SSD and I am perfectly fine running most of my everyday tasks on an old Core 2 Duo or Core 2 Quad. The difference between Bloomfield, Sandy Bridge, Ivy Bridge, and Haswell processors is likewise shrinking each generation – my i7-965X that I’m typing this on continues to run very well, thank you very much! If people and businesses aren’t upgrading as frequently, then you need to find other ways to keep your fabs busy, and selling production to other companies is the low hanging fruit.

Regardless of the reasons behind the move, this marks a new era in Intel fabrication history. It will be interesting to see what other chips end up being fabbed at Intel over the next year or two. 

Find: NVIDIA's G-Sync reinvents display

It's like clean air: doesn't look like much, but once you've experienced it firsthand, you can't go back. 

This will start slowly but rapidly become the new standard. 
 
 
// published on AnandTech // visit site
NVIDIA's G-Sync: Attempting to Revolutionize Gaming via Smoothness

Earlier today NVIDIA announced G-Sync, its variable refresh rate technology for displays. The basic premise is simple. Displays refresh themselves at a fixed interval, but GPUs render frames at a completely independent rate. The disconnect between the two is one source of stuttering. You can disable v-sync to try to work around it, but the end result is tearing at best, and stuttering plus tearing at worst.

NVIDIA's G-Sync is a combination of software and hardware technologies that allows a modern GeForce GPU to control a variable display refresh rate on a monitor equipped with a G-Sync module. In traditional setups a display will refresh the screen at a fixed interval, but in a G-Sync enabled setup the display won't refresh the screen until it's given a new frame from the GPU.

NVIDIA demonstrated the technology on 144Hz ASUS panels, which obviously caps the max GPU present rate at 144 fps although that's not a limit of G-Sync. There's a lower bound of 30Hz as well, since anything below that and you'll begin to run into issues with flickering. If the frame rate drops below 30 fps, the display will present duplicates of each frame.

There's a bunch of other work done on the G-Sync module side to deal with some funny effects of LCDs when driven asynchronously. NVIDIA wouldn't go into great detail other than to say that there are considerations that need to be taken into account.

The first native G-Sync enabled monitors won't show up until Q1 next year, however NVIDIA will be releasing the G-Sync board for modding before the end of this year. Initially supporting Asus’s VG248QE monitor, end-users will be able to mod their monitor to install the board, or alternatively professional modders will be selling pre-modified monitors. Otherwise in Q1 of next year ASUS will be selling the VG248QE with the G-Sync board built in for $399, while BenQ, Philips, and ViewSonic are also committing to rolling out their own G-Sync equipped monitors next year too. I'm hearing that NVIDIA wants to try and get the module down to below $100 eventually. The G-Sync module itself looks like this:

There's a controller and at least 3 x 256MB memory devices on the board, although I'm guessing there's more on the back of the board. NVIDIA isn't giving us a lot of detail here so we'll have to deal with just a shot of the board for now.

Meanwhile we do have limited information on the interface itself; G-Sync is designed to work over DisplayPort (since it’s packet based), with NVIDIA manipulating the timing of the v-blank signal to indicate a refresh. Importantly, this indicates that NVIDIA may not be significantly modifying the DisplayPort protocol, which at least cracks open the door to other implementations on the source/video card side.

Although we only have limited information on the technology at this time, the good news is we got a bunch of cool demos of G-Sync at the event today. I'm going to have to describe most of what I saw since it's difficult to present this otherwise. NVIDIA had two identical systems configured with GeForce GTX 760s, both featured the same ASUS 144Hz displays but only one of them had NVIDIA's G-Sync module installed. NVIDIA ran through a couple of demos to show the benefits of G-Sync, and they were awesome.

The first demo was a swinging pendulum. NVIDIA's demo harness allows you to set min/max frame times, and for the initial test case we saw both systems running at a fixed 60 fps. The performance on both systems was identical as was the visual experience. I noticed no stuttering, and since v-sync was on there was no visible tearing either. Then things got interesting.

NVIDIA then dropped the frame rate on both systems down to 50 fps, once again static. The traditional system started to exhibit stuttering as we saw the effects of having a mismatched GPU frame rate and monitor refresh rate. Since the case itself was pathological in nature (you don't always have a constant mismatch between the two), the stuttering was extremely pronounced. The same demo on the g-sync system? Flawless, smooth.

NVIDIA then dropped the frame rate even more, down to an average of around 45 fps but also introduced variability in frame times, making the demo even more realistic. Once again, the traditional setup with v-sync enabled was a stuttering mess while the G-Sync system didn't skip a beat.

Next up was disabling v-sync with hopes of reducing stuttering, resulting in both stuttering (there's still a refresh rate/fps mismatch) and now tearing. The G-Sync system, once again, handled the test case perfectly. It delivered the same smoothness and visual experience as if we were looking at a game rendering perfectly at a constant 60 fps. It's sort of ridiculous and completely changes the overall user experience. Drops in frame rate no longer have to be drops in smoothness. Game devs relying on the presence of G-Sync can throw higher quality effects at a scene since they don't need to be as afraid of frame rate excursions below 60 fps.

Switching gears NVIDIA also ran a real world demonstration by spinning the camera around Lara Croft in Tomb Raider. The stutter/tearing effects weren't as pronounced as in NVIDIA's test case, but they were both definitely present on the traditional system and completely absent on the G-Sync machine. I can't stress enough just how smooth the G-Sync experience was, it's a game changer.

The combination of technologies like GeForce Experience, having a ton of GPU performance and G-Sync can really work together to deliver a new level of smoothness, image quality and experience in games. We've seen a resurgence of PC gaming over the past few years, but G-Sync has the potential to take the PC gaming experience to a completely new level.

Update: NVIDIA has posted a bit more information about G-Sync, including the specs of the modified Asus VG248QE monitor, and the system requirements.

NVIDIA G-Sync System Requirements
Video Card: GeForce GTX 650 Ti Boost or higher
Display: G-Sync equipped display
Driver: R331.58 or higher
Operating System: Windows 7/8/8.1

 

Find: Chrome extension uses colored text to speed up online reading

Interesting idea. Wonder if altering lines might actually slow you down?

*** 
 
// published on The Verge - All Posts // visit site
Chrome extension uses colored text to speed up online reading

Lots of apps have offered the promise of reading text faster, but a Chrome extension called Beeline Reader is using an unexpected tool to get there: colored text. Built on top of the Readability code, the extension works by reformatting the text on a page into a single stripped-down column, then color-coding alternating lines of text to ensure readers never get lost. According to a recent study, that's enough to get the average person through a block of text ten percent faster.
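The alternating-line idea is simple enough to sketch. This is a hypothetical illustration of the scheme the article describes, not BeeLine's actual code; the palette, wrap width, and function are my own assumptions.

```python
# Hypothetical sketch of alternating line colors on wrapped text.
# The palette, wrap width, and function are illustrative assumptions.
from textwrap import wrap

COLORS = ["#1b3a6b", "#8b1a1a"]  # e.g. dark blue, dark red

def colorize(text, width=60):
    """Wrap text into lines, then tag each line with an alternating color."""
    lines = wrap(text, width)
    return [(COLORS[i % len(COLORS)], line) for i, line in enumerate(lines)]

sample = ("Lots of apps have offered the promise of reading text faster, "
          "but colored text helps the eye find the start of the next line.")
for color, line in colorize(sample, width=40):
    print(color, line)
```

The color change between consecutive lines is what supposedly keeps the eye from losing its place when it sweeps back to the left margin.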

While the trick itself is simple, there's a surprising amount of psychological background to it. The developer tells Fast Company that he was inspired by the Stroop Test in psychology, which shows that readers inevitably perceive the color of the text they're...


Find: AMD exploits its console wins by bringing its low-level Mantle API back to the PC

This could be a big deal. AMD is trying to disrupt the API status quo with its new console leverage.

*** 
 
 // published on AnandTech // visit site

Understanding AMD’s Mantle: A Low-Level Graphics API For GCN

Wrapping up our AMD product showcase coverage, AMD’s final announcement of the day was a very brief announcement about a new API called Mantle. Mantle is something of an enigma at this point – AMD isn’t saying a whole lot until November with the AMD Developer Summit – and although it’s conceptually simple, the real challenge is trying to understand where it fits into AMD’s product strategy, and perhaps more importantly what brought them to this point in the first place. So although we don’t have all of the necessary details in hand quite yet, we wanted to spend some time breaking down the matters surrounding Mantle as much as we reasonably can.

Find: Analyzing Valve’s SteamOS, Steam Machines, and Steam Controller

Smart. A third alternative. 

*** 
 
 // published on AnandTech // visit site
Analyzing Valve’s SteamOS, Steam Machines, and Steam Controller Announcements

In 2012, Valve released an update to their Steam platform called Big Picture, which essentially consisted of a new user interface tailored towards the needs of the living room where people use large HDTVs and gamepads in place of the usual keyboard and mouse interface. We’ve seen 10-foot UIs before – Windows Media Center and most of the game console interfaces being prime examples – and they’re pretty much required if you want a UI people can use while sitting on the couch. Along with Big Picture, the past several years have also seen Valve and Steam branch out from being a Windows-only software solution to something that’s available on OS X, Linux, and even (in a more limited fashion) on the PlayStation 3. Not every game within Steam is currently available on every platform, but increasingly we’re seeing more titles launch with support for all of the supported Steam platforms.

With all of the pieces in place, we started hearing rumblings about the “SteamBox” earlier this year, with most people assuming that Valve would put together something akin to a gaming console with a predefined set of hardware. This week, Valve has released additional details about what they’re planning, and it’s a move that will definitely shake up the gaming industry. Valve is taking a three-pronged approach, and they released some information about each aspect over the course of the past week: SteamOS, Steam Machines, and the Steam Controller. Many details are not yet finalized, but let’s quickly go over what we do know.

The Triple Header

Perhaps the most interesting aspect of the announcement is that Valve will be releasing a new operating system, SteamOS. Similar to Google’s Android and Chrome OS, SteamOS will be based on Linux, and obviously there will be a lot of tuning to make SteamOS work well as a living room OS. Valve specifically mentions support for in-home streaming; music, TV, and movie services; and family options to allow the sharing of games between Steam profiles. Valve is also promising full compatibility with the current ~3000 titles available on Steam, which presumably means that Valve will be doing something similar to WINE (Wine Is Not an Emulator) for those titles that don’t have native Linux support. Needless to say, it comes as no surprise that recently both NVIDIA and AMD made note of the fact that they will be doing additional work to improve Linux driver support moving forward.

Other aspects of SteamOS basically build off of everything that makes Steam as a platform so attractive to many users. All of your games that you purchased – any time in the history of Steam – are available anywhere you log in. Your friends list comes with you, save games and settings are stored in the cloud, and the Steam Workshop provides a wealth of user created content. Steam is currently available in 185 countries and 25 languages, and as anyone that has used Steam since it first debuted a decade ago can attest, Valve is constantly looking on ways to improve the platform. SteamOS may be the next evolution of Steam as a platform, but of course Valve will continue to support other operating systems as well.

The second bullet point is perhaps the easiest to understand: the Steam Machines are preconfigured hardware solutions from various partners that will run SteamOS. We don’t know precisely what the hardware will be, but all signs point to it being mostly off-the-shelf hardware that you could use in building any modern PC. There will likely be entry-level hardware, midrange hardware, and high-end solutions that cover a range of price points and performance.

Valve will be releasing a prototype Steam Machine to 300 beta testers over the coming months, selected more or less randomly from applications received before October 25. We’ll know more about the precise hardware configuration Valve is using in a month or so. It will also be possible to download and install SteamOS on your own (though we're not sure when SteamOS will go public), so at its most basic level a Steam Machine is just any PC that happens to be running SteamOS.

The third and final item that Valve announced is their new Steam Controller. Gamepads are nothing new for gaming consoles, but the Steam Machines are in a somewhat unique position of providing a vast library of games where many titles were not built with gamepads in mind, and certainly not the Steam Controller (which hasn't existed prior to this announcement).

The Steam Controller is significantly different than what we’ve seen with most gamepads. Instead of the usual dual thumbsticks, the Steam Controller includes two high-resolution circular trackpads, which are also clickable. In addition, there will be 16 buttons (with the ability to shift between left or right hand configurations via a software switch) and a touchscreen in the center – though the initial prototype will have four buttons in place of the touchscreen. Valve is also promising improved haptics (i.e. force feedback) via dual linear resonant actuators (small, strong, weighted electromagnets attached to the circular trackpads).

As with SteamOS, Valve is promising full compatibility with the entire Steam library, which means they need a way for their gamepad to work in place of the keyboard and mouse that some titles are going to expect. The combination of a touchscreen, various buttons, and the circular touchpads together provide the necessary platform, and a utility will allow users to customize any game for the new controller. Valve will also be leveraging the power of their Steam Community to allow users to share custom configurations, so similar to NVIDIA's GeForce Experience and AMD's new Gaming Evolved (powered by Raptr), you won't necessarily need to roll your own for each game you play.

Tying It All Together

So that’s the short overview, and as usual the proof of the pudding is in the eating – a pudding that we don’t have yet. Given that digital entertainment is a rapidly growing market, it’s easy to understand why Valve would be interested in moving beyond Steam in its current form to something that can compete with game consoles (and perhaps even Android and iOS at some point, though Valve makes no mention of such a use case right now). SteamOS will be available for free, both to end users and manufacturers, but it’s interesting that there’s no mention of it being open source – the core OS will continue to be open, naturally, but I suspect all of the custom code that speaks to Valve’s servers will never see the light of day (which is fine by me).

A more cynical perspective might say that there’s nothing particularly new or shocking in this announcement. Sure, we’re getting a new gamepad at some point, and another Linux-based operating system, but if you already have a Windows PC connected to your HDTV and running Steam, this hasn’t really changed the equation much. The major difference is that Microsoft is going to get even more competition from alternative OSes, and as someone who enjoys competition I’m not going to complain. It also means that Valve has the potential to increase their revenue – not so much from the hardware side as from software: inexpensive Steam Machines could take over the roles currently filled by traditional consoles, and every game purchased on a Steam Machine comes through Steam. You can almost hear the “ka-ching”!

There’s a difference between Steam Machines and traditional consoles of course – or at least there appears to be. There’s no specific set of hardware being dictated by Valve, which means for better or worse users will still have to deal with customizing graphics settings, resolutions, etc. and developers still need to worry about catering to a wide range of hardware with sometimes radically different levels of performance. On the bright side, it also means that Steam Machines won’t have to last 7 to 10 years between updates.

What I’m most interested in seeing right now is what sort of performance we actually get out of SteamOS, on a variety of hardware. We all know that Windows is a tremendously bloated operating system – just look at the default install size of Windows 7 or Windows 8. However, just because there’s a bunch of extra stuff that we may not use all that much doesn’t mean that Windows as a gaming platform isn’t viable. I haven’t personally done any testing of gaming performance on Windows versus gaming performance on OS X or Linux, but anecdotally Windows performance has been substantially better in nearly every case. Valve has the potential to change the equation; with an OS focused much more on gaming, performance in SteamOS could be competitive with, or even better than, what we see under Windows.

Of course, if I’m right about SteamOS using something similar to WINE, we’re talking about adding additional overhead to DirectX/OpenGL, at least initially. It’s a pretty big stretch to expect better performance from SteamOS when it initially launches in 2014, but down the road we might see some real changes in the status quo. Give NVIDIA and AMD some time to work with Valve, and maybe we’ll see porting of AMD’s Mantle to the platform as well (and NVIDIA’s CUDA, etc.).

Short-term, we have more questions than answers, but this is definitely a bold (if somewhat expected) move from Valve. They’ve gone from creating games to becoming perhaps the largest “game publisher” around, and their next step appears to directly challenge behemoths like Microsoft (on both the Windows and Xbox fronts), Sony, and Nintendo. I’d be lying if I didn’t admit that I would really like to see Valve succeed at altering the gaming system landscape yet again.

As noted above, Valve will be sending out 300 prototype Steam Machines, mostly to randomly selected applicants. You can read details on how to apply for the beta program on the Steam Machines page, which would get you not only the prototype Steam Machine but also the prototype Steam Controller. You need to apply before October 25, and participants will apparently be selected and receive their prototype machines before the end of 2013. (Wish me luck!)

Find: Oculus Rift on when 8K pixels and 30 Hz aren't enough, and why

The eye is still sensitive to certain kinds of detail beyond 20/20 acuity: why else do you think people prefer 1200 dpi printers over 600 dpi? 
 
// published on Ars Technica // visit site

Virtual Perfection: Why 8K resolution per eye isn’t enough for perfect VR

So you want me to squeeze two 8K displays into this space? No problem! Give me a decade or so...

"Without going into a rant, the term 'Retina Display' is garbage, I think."

Palmer Luckey, the founder and creator of the Oculus Rift, is a bit of a perfectionist when it comes to creating the best possible virtual reality experience. So when our recent interview turned toward the ideal future for a head-mounted display—a theoretical "perfect" device that delivers everything he could ever dream of—he did go on a little rant about what we currently consider "indistinguishable" pixels.

"There is a point where you can no longer distinguish individual pixels, but that does not mean that you cannot distinguish greater detail," he said. "You can still see aliasing on lines on a retina display. You can't pick out the pixels, but you can still see the aliasing. Let's say you want to have an image of a piece of hair on the screen. You can't make it real-size... it would still look jaggy and terrible. There's a difference between where you can't see pixels and where you can't make improvements."
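Luckey's distinction can be made concrete with some back-of-the-envelope arithmetic. The sketch below is illustrative only (the field-of-view and acuity figures are assumptions, not from the article): it compares the horizontal pixels-per-degree of common panel resolutions, spread across an assumed 110-degree field of view, against the roughly 60 pixels per degree at which 20/20 vision stops resolving individual pixels.

```python
# Back-of-the-envelope check: at what resolution do individual pixels
# vanish for a head-mounted display? All figures are illustrative
# assumptions, not specifications from Oculus or the article.

ACUITY_PPD = 60   # ~1 arcminute per pixel: the usual 20/20 threshold
FOV_DEG = 110     # assumed horizontal field of view for the headset

for name, h_pixels in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
    ppd = h_pixels / FOV_DEG  # pixels per degree of visual angle
    status = "pixels indistinguishable" if ppd >= ACUITY_PPD else "pixels visible"
    print(f"{name}: {ppd:.1f} px/deg ({status})")

# Note: vernier (edge-alignment) acuity is far finer than 1 arcminute --
# on the order of arcseconds -- which is why aliasing on lines remains
# visible even once individual pixels can no longer be picked out.
```

So an 8K panel per eye would just clear the pixel-visibility threshold at this field of view, yet still sit well short of the acuity that makes aliasing perceptible, which is exactly the gap Luckey is describing.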
