Find: AMD launches its first fully integrated CPU/GPU

With Mantle

*** 
 
 // published on AnandTech // visit site
AMD Kaveri APU Launch Details: Desktop, January 14th

Kicking off today is AMD’s annual developer conference, which now goes by the name APU13. There will be several APU/CPU related announcements coming out of the show this week, but we’ll start with what’s likely to be the most interesting for our regular readers: the launch date for AMD’s Kaveri APU.

First and foremost, AMD has confirmed that Kaveri will be shipping in Q4’13, with a launch/availability date of January 14th, 2014. For those of you keeping track of your calendars, this is the week after CES 2014, with AMD promising further details on the Kaveri launch for CES.

Second of all, we have confirmation on what the highest shipping APU configuration will be. Kaveri will have up to 4 CPU cores (2 modules), based on Steamroller, AMD's latest revision of their desktop CPU architecture. Meanwhile the GPU will be composed of 8 GCN 1.1 CUs, which puts the SP count at 512 (equivalent to today's desktop Radeon HD 7750). Furthermore, AMD is throwing around a floating point performance number – 856 GFLOPS – which, thanks to some details PCWorld found in AMD's footnotes, gives us specific clockspeeds and even a product name: the A10-7850K, with a 3.7GHz CPU clock and a 720MHz GPU clock.
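
As a quick sanity check (my own back-of-the-envelope math, not AMD's): if we assume one FMA (two FLOPs) per GPU SP per cycle and 8 single-precision FLOPs per Steamroller core per cycle, those clocks land almost exactly on AMD's 856 GFLOPS figure.

```python
# Back-of-the-envelope check of the 856 GFLOPS number. The per-cycle FLOP
# rates below are assumptions (FMA = 2 FLOPs per GPU SP per cycle, 8
# single-precision FLOPs per Steamroller core per cycle), not AMD-published figures.
gpu_sps, gpu_clock_ghz = 512, 0.720      # 8 GCN CUs x 64 SPs, 720MHz
cpu_cores, cpu_clock_ghz = 4, 3.7        # A10-7850K

gpu_gflops = gpu_sps * 2 * gpu_clock_ghz       # 737.28
cpu_gflops = cpu_cores * 8 * cpu_clock_ghz     # 118.4

print(round(gpu_gflops + cpu_gflops, 2))       # 855.68, i.e. AMD's ~856 GFLOPS
```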

Third, in a departure from how AMD launched Trinity and Richland, Kaveri will be coming to the desktop first. The January 14th date is for the availability of desktop socket FM2+ Kaveri APUs, with server and mobile APUs to follow (these are presumably some of the CES details to come). Pricing and specific SKUs will of course be announced at a later time, and there wasn’t any clarification on whether this was just for OEM hardware, or if we’ll be seeing retail CPUs too.

Finally, AMD has confirmed on the GPU side that Kaveri will be shooting for feature parity with AMD's latest discrete GPUs by supporting many of the same features. Specifically, TrueAudio will be making an appearance on Kaveri, bringing AMD's dedicated audio processing block to their APUs as well as their GPUs. On the discrete GPUs this move was mostly about functionality, but on Kaveri it should take on a second role, since an APU is exactly the kind of CPU-constrained environment where dedicated audio hardware will be a boon. Furthermore, AMD has also confirmed that their new low-level API, Mantle, will be supported on Kaveri – it is, after all, a GCN-based GPU.

For AMD, Kaveri is going to be a big deal; likely the biggest CPU/APU launch for the company in quite some time. Since the acquisition of ATI all the way back in 2006, this is what the company has been building up to: a processor with a highly integrated CPU/GPU that allows both to be leveraged nearly transparently by software. Kaveri is the launch vehicle for HSA, both as a specific standard and as a general concept for a PC CPU/APU, so it's something that everyone inside and outside of AMD will be watching closely.


Find: New wave of online peer review and discussion tools frightens some scientists

Interesting ...
  
*** 
 
// published on Ars Technica // visit site
New wave of online peer review and discussion tools frightens some scientists
Sites like Publons and PubPeer hope to quicken the pace of scientific conversation.

Earlier this year, I wrote a story about a new HIV/AIDS detection kit that was under development. Since that time, the same group has published two more papers on the same topic, but questions are starting to be asked about the original research. The questions were so simple that I was pretty embarrassed I didn't spot the problems on my own.

But I wouldn't have gotten even that far were it not for the new directions that peer review and social media are taking science. I was alerted to the problems by Twitter user @DaveFernig, who pointed me to a discussion about the paper on PubPeer.

Before getting to that, let's recap what impressed me about the HIV detection paper. It achieved a couple of things that made it stand out from a veritable truckload of similar proof-of-principle experiments. The test was very sensitive—so sensitive that it could detect viral loads below that of the standard test and may even reach single-molecule sensitivity. When someone claims single-molecule sensitivity, I tend to get all hot and bothered and all my critical thinking faculties vanish for a while.


Find: Finally, a modular phone architecture - Motorola's Project Ara

A great idea, for Google, Motorola, and everyone. It will set Motorola apart from the crowd, which it sorely needs, and will set us all on the path toward cheaper, more sustainable mobile devices, which we all sorely need. It also improves the experience by giving users a new way to express themselves through customization.

Phones are small enough now that I think the extra space modularity requires won't be a serious problem.

Really hope Motorola sees this through.

*** 
 
// published on AnandTech // visit site
Motorola's Project Ara: Phonebloks from an OEM

Phonebloks was a campaign focused on attracting the interest of OEMs by showing that there was an incredible amount of interest in a modular phone. This was mostly for reasons of reducing electronics waste, the potential for incredible customization, and the potential for reduced upgrade costs associated with the 1-2 year upgrade cycle. Since the current model requires the purchase of an entire phone, upgrading a single “module”, or a set of modules, would reduce the cost of upgrading for the consumer, much like upgrading individual components in a desktop PC.

However, at the time it seemed unlikely that such a campaign would ever produce a meaningful result in the industry. Now it might be less so, as Motorola has announced Project Ara, a platform that promises the same modularity the Phonebloks campaign was promoting, and has also partnered with the creator of the Phonebloks campaign for this project. The concept is largely the same, with an endoskeleton and modules that make up the phone. The display, following the Phonebloks concept, is also likely to be its own module. While actual details of the concept are effectively nil, there are still an enormous number of challenges that such a design would face.

The first would be from a purely hardware perspective, as there is an unavoidable tradeoff between volumetric efficiency and modularity in such a design. While modern smartphones are effectively a tight stack of PCB, battery, and display, a modular design adds an entire interface for each module to connect it to the rest of the phone. This means that the memory module would effectively go from the size of an average eMMC chip to around a full-size SD card due to the need for a durable connector. This is most readily seen in the differences between the international and Korean LG G2: the international variant has a ~15% larger battery by virtue of its sealed design, which allowed for LG Chem's curved battery pack with thinner walls and thus more battery capacity.

The second issue would be regulatory, as the FCC only tests single configurations for approval. Such a design would be incredibly challenging to get approved, as a specific combination of modules could easily produce unpredictable RF behavior. There could also be issues with the endoskeleton itself: a single PCB is unlikely to suffer short circuits or other connection problems, while a modular design would face exactly those challenges.

The final major issue is history: the failure of Intel's Whitebook initiative from 2006 makes it much harder to see a similar initiative succeeding in the smartphone space. Whitebook promised a DIY, modular laptop, much like Phonebloks and Project Ara, and it failed as the market moved toward completely integrated designs such as Apple's rMBP line. Laptops like the rMBP are effectively impossible for the user to repair, much less open, yet they have become incredibly popular, and PC OEMs have followed Apple's lead, with consumer demand generally tending toward thinner and lighter machines. The same demand seems to hold in the smartphone space, so it is difficult to see such an initiative succeeding without significant compromise, either in modularity or in competitiveness with more integrated smartphones. While such initiatives are sure to garner widespread enthusiast support, enthusiasts generally lose their ability to influence the market once a segment becomes popular with general consumers, as the PC industry shows. However, it remains to be seen whether there is mass-market appeal for such a phone, and it may well be that Motorola is tapping a niche with enormous potential.

Find: Intel Opens Fabs to Competing Chips

Altera ARM SoCs – and maybe someday NVIDIA GPUs – fabbed by Intel in the USA. Whooda thunk it?

Intel is generally one generation ahead in process technology, so those who contract with them will have a power/performance advantage. GPUs, for example, could see a sudden jump in speed. Phones could get improved battery life.

You know that things are bad at Intel if they can't use all their fab capacity.

*** 
 
// published on AnandTech // visit site
Intel Opens Fabs to Competing Chips

In a story posted today on Forbes, Altera has announced that they have entered into a partnership with Intel to have their next-generation 64-bit ARM chips produced at Intel's fabs. Details on precisely what process technology will be used on the upcoming chips are scant, but 22nm would give anyone willing to pay Intel's price a leg up on the competition, and of course Intel will be moving to 14nm in the future. Really, this announcement would be interesting even if someone were to merely use Intel's older 32nm fabs.

Intel has apparently inked deals with other companies as well. The Inquirer has this quote from an Intel spokesperson: “We have several design wins thus far and the announcement with Altera in February is an important step towards Intel's overall foundry strategy. Intel will continue to be selective on customers we will enable on our leading edge manufacturing process.”

The key there is the part about being “selective”, but I would guess it’s more a question of whether a company has the volume as well as the money to pay Intel, rather than whether or not Intel would be willing to work with them. This announcement opens the doors for future opportunities – NVIDIA GPUs on Intel silicon would surely be interesting, but given that AMD has gone fabless as well we could also see their future CPUs/GPUs fabbed by Intel.

If we take things back another step, the reality of the semiconductor business is that fabs are expensive to build and maintain. They then need to be updated every couple of years to the latest technology, or at least new fabs need to be built, to stay competitive. If you can't run your fabs more or less at capacity, you start to fall behind on all fronts. If Intel could keep all of their fabrication assets fully utilized with their own products, it would be a different story, but that era appears to be coming to a close.

The reason for this is pretty simple. We’re seeing a major plateau in terms of the computing performance most people need on a regular basis these days. Give me an SSD and I am perfectly fine running most of my everyday tasks on an old Core 2 Duo or Core 2 Quad. The difference between Bloomfield, Sandy Bridge, Ivy Bridge, and Haswell processors is likewise shrinking each generation – my i7-965X that I’m typing this on continues to run very well, thank you very much! If people and businesses aren’t upgrading as frequently, then you need to find other ways to keep your fabs busy, and selling production to other companies is the low hanging fruit.

Regardless of the reasons behind the move, this marks a new era in Intel fabrication history. It will be interesting to see what other chips end up being fabbed at Intel over the next year or two. 

Opp: CIRCUIT Studio logo competition

****
The CIRCUIT Research Studio at the NCSU College of Humanities and Social Sciences is soliciting entries for a competition to design a logo for the new Studio. Winners will receive a US$150 cash prize and will have their logo displayed on the Studio website and on all of the Studio's communication materials. Logos should be submitted via email to circuit@lists.ncsu.edu as .jpg or .psd files, in color and black and white. The deadline for submissions is November 8, 2013.

The CIRCUIT Research Studio is a collaborative research space where CHASS-based faculty and students work on experimental, theory-driven, and cutting-edge research on Digital Media, Gaming, Digital Humanities, and Mobile Media. The technical and production-oriented aspects of the Studio help CHASS faculty and students develop deep connections with STEM units on our campus and beyond, and pursue large-scale, funded research projects. The Studio also enables CHASS faculty and students to pursue a deeper connection to practice in their scholarship and their teaching. In doing so, the Studio bridges the persistent gap between the “two cultures.”

Adriana de Souza e Silva, David Rieder, Nick Taylor (CIRCUIT Studio co-directors)

Find: NVIDIA's G-Sync reinvents the display

It's like clean air: doesn't look like much, but once you've experienced it firsthand, you can't go back. 

This will start slowly but rapidly become the new standard. 
 
 
// published on AnandTech // visit site
NVIDIA's G-Sync: Attempting to Revolutionize Gaming via Smoothness

Earlier today NVIDIA announced G-Sync, its variable refresh rate technology for displays. The basic premise is simple. Displays refresh themselves at a fixed interval, but GPUs render frames at a completely independent frame rate. The disconnect between the two is one source of stuttering. You can disable v-sync to try to work around it, but the end result is at best tearing, and at worst stuttering and tearing.

NVIDIA's G-Sync is a combination of software and hardware technologies that allows a modern GeForce GPU to control a variable display refresh rate on a monitor equipped with a G-Sync module. In traditional setups a display will refresh the screen at a fixed interval, but in a G-Sync enabled setup the display won't refresh the screen until it's given a new frame from the GPU.

NVIDIA demonstrated the technology on 144Hz ASUS panels, which obviously caps the max GPU present rate at 144 fps, although that's not a limit of G-Sync. There's a lower bound of 30Hz as well, since anything below that begins to run into issues with flickering. If the frame rate drops below 30 fps, the display will present duplicates of each frame.
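
To make that refresh logic concrete, here's a rough sketch (an illustration of the behavior described above, not NVIDIA's implementation) of a variable-refresh scheduler that presents a new frame as soon as it arrives, never faster than the 144Hz panel maximum, and re-presents the previous frame whenever the GPU blows past the 30Hz lower bound.

```python
# Illustrative variable-refresh pacing: refresh on new frames, capped at 144Hz,
# with duplicate refreshes if no new frame arrives within the 30Hz window.
MAX_HZ, MIN_HZ = 144.0, 30.0
MIN_INTERVAL, MAX_INTERVAL = 1.0 / MAX_HZ, 1.0 / MIN_HZ

def refresh_times(frame_times):
    """frame_times: times (s) at which the GPU finishes frames.
    Returns a list of (refresh_time, is_duplicate) tuples."""
    refreshes, last = [], 0.0
    for t in frame_times:
        # GPU took longer than the 30Hz window: keep re-showing the old frame.
        while t - last > MAX_INTERVAL:
            last += MAX_INTERVAL
            refreshes.append((last, True))      # duplicate refresh
        # Otherwise present the new frame, but never faster than 144Hz.
        last = max(t, last + MIN_INTERVAL)
        refreshes.append((last, False))         # fresh frame
    return refreshes

# Roughly 45 fps with some variability, as in NVIDIA's third demo case.
print(refresh_times([0.020, 0.045, 0.068, 0.120]))
```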

There's a bunch of other work done on the G-Sync module side to deal with some funny effects of LCDs when driven asynchronously. NVIDIA wouldn't go into great detail other than to say that there are considerations that need to be taken into account.

The first native G-Sync enabled monitors won't show up until Q1 next year, however NVIDIA will be releasing the G-Sync board for modding before the end of this year. Initially supporting ASUS's VG248QE monitor, end-users will be able to mod their monitor to install the board, or alternatively professional modders will be selling pre-modified monitors. Otherwise, in Q1 of next year ASUS will be selling the VG248QE with the G-Sync board built in for $399, while BenQ, Philips, and ViewSonic are also committing to rolling out their own G-Sync equipped monitors next year. I'm hearing that NVIDIA wants to try to get the module down to below $100 eventually. The G-Sync module itself looks like this:

There's a controller and at least 3 x 256MB memory devices on the board, although I'm guessing there's more on the back of the board. NVIDIA isn't giving us a lot of detail here so we'll have to deal with just a shot of the board for now.

Meanwhile we do have limited information on the interface itself; G-Sync is designed to work over DisplayPort (since it’s packet based), with NVIDIA manipulating the timing of the v-blank signal to indicate a refresh. Importantly, this indicates that NVIDIA may not be significantly modifying the DisplayPort protocol, which at least cracks open the door to other implementations on the source/video card side.

Although we only have limited information on the technology at this time, the good news is we got a bunch of cool demos of G-Sync at the event today. I'm going to have to describe most of what I saw since it's difficult to present this otherwise. NVIDIA had two identical systems configured with GeForce GTX 760s; both featured the same ASUS 144Hz displays, but only one of them had NVIDIA's G-Sync module installed. NVIDIA ran through a couple of demos to show the benefits of G-Sync, and they were awesome.

The first demo was a swinging pendulum. NVIDIA's demo harness allows you to set min/max frame times, and for the initial test case we saw both systems running at a fixed 60 fps. The performance on both systems was identical as was the visual experience. I noticed no stuttering, and since v-sync was on there was no visible tearing either. Then things got interesting.

NVIDIA then dropped the frame rate on both systems down to 50 fps, once again static. The traditional system started to exhibit stuttering as we saw the effects of having a mismatched GPU frame rate and monitor refresh rate. Since the case itself was pathological in nature (you don't always have a constant mismatch between the two), the stuttering was extremely pronounced. The same demo on the G-Sync system? Flawless, smooth.

NVIDIA then dropped the frame rate even more, down to an average of around 45 fps but also introduced variability in frame times, making the demo even more realistic. Once again, the traditional setup with v-sync enabled was a stuttering mess while the G-Sync system didn't skip a beat.

Next up was disabling v-sync in hopes of reducing stuttering, resulting in both stuttering (there's still a refresh rate/fps mismatch) and now tearing as well. The G-Sync system, once again, handled the test case perfectly. It delivered the same smoothness and visual experience as if we were looking at a game rendering perfectly at a constant 60 fps. It's sort of ridiculous and completely changes the overall user experience. Drops in frame rate no longer have to be drops in smoothness. Game devs relying on the presence of G-Sync can throw higher quality effects at a scene since they don't need to be as afraid of frame rate excursions below 60 fps.

Switching gears, NVIDIA also ran a real-world demonstration by spinning the camera around Lara Croft in Tomb Raider. The stutter/tearing effects weren't as pronounced as in NVIDIA's test case, but they were definitely present on the traditional system and completely absent on the G-Sync machine. I can't stress enough just how smooth the G-Sync experience was; it's a game changer.

The combination of technologies like GeForce Experience, a ton of GPU performance, and G-Sync can really work together to deliver a new level of smoothness, image quality, and experience in games. We've seen a resurgence of PC gaming over the past few years, but G-Sync has the potential to take the PC gaming experience to a completely new level.

Update: NVIDIA has posted a bit more information about G-Sync, including the specs of the modified Asus VG248QE monitor, and the system requirements.

NVIDIA G-Sync System Requirements
Video Card: GeForce GTX 650 Ti Boost or higher
Display: G-Sync equipped display
Driver: R331.58 or higher
Operating System: Windows 7/8/8.1

 

Spotted: HierarchicalTopics - Visually Exploring Large Text Collections Using Topic Hierarchies


 
 // published on Visualization and Computer Graphics, IEEE Transactions on - new TOC // visit site

HierarchicalTopics: Visually Exploring Large Text Collections Using Topic Hierarchies

Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate at which more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to addressing the challenge of analyzing large text corpora. As text corpora and vocabularies grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most current topic-based visualizations to represent a large number of topics without becoming cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system, HierarchicalTopic (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as the temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aids expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of the hierarchical topic structure. The study results reveal that HT leads to faster identification of a large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.
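
For a rough sense of what "building a topic hierarchy from a flat list of topics" means, here's a minimal sketch using plain agglomerative clustering over topic term distributions. This is an illustration only, not the paper's Topic Rose Tree algorithm, and the toy topics are made up.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Toy topics: each row is a made-up topic's probability distribution over a
# small vocabulary; columns correspond to the terms in `vocab`.
vocab = ["election", "senate", "vote", "goal", "match", "league", "gene", "protein", "cell"]
topics = np.array([
    [0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # politics A
    [0.4, 0.2, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # politics B
    [0.0, 0.0, 0.0, 0.5, 0.3, 0.2, 0.0, 0.0, 0.0],  # sports
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4, 0.3, 0.3],  # biology A
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.4, 0.3],  # biology B
])

# Jensen-Shannon distance between topic distributions, then average-linkage
# clustering; reading the linkage matrix bottom-up gives a binary topic hierarchy.
tree = linkage(pdist(topics, metric="jensenshannon"), method="average")
print(tree)
```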

Spotted: User-steered topic modeling

Like PCA and MDS, LDA often identifies topics that don't make sense to people. Steering (here via interactive NMF) might help....

*** 
 
 // published on Visualization and Computer Graphics, IEEE Transactions on - new TOC // visit site

UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization

Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
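
For context on the NMF side, here's a minimal sketch of plain NMF topic modeling with scikit-learn (not UTOPIAN's semi-supervised, interactive formulation; the toy corpus is made up). With a deterministic initialization such as NNDSVD, the factorization is consistent across runs, which is part of the appeal over sampled LDA.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Made-up toy corpus with two obvious themes.
docs = [
    "the election results and the senate vote",
    "the senate debates the election law before the vote",
    "the team scored a late goal in the match",
    "the league match ended with a single goal",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# NNDSVD initialization makes the factorization deterministic across runs.
nmf = NMF(n_components=2, init="nndsvd")
doc_topic = nmf.fit_transform(X)    # document-topic weights (W)
topic_term = nmf.components_        # topic-term weights (H)

terms = vec.get_feature_names_out()  # requires scikit-learn >= 1.0
for k, weights in enumerate(topic_term):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")
```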

Chrome extension uses colored text to speed up online reading

Interesting idea. Wonder if alternating line colors might actually slow you down?

*** 
 
// published on The Verge - All Posts // visit site
Chrome extension uses colored text to speed up online reading

Lots of apps have offered the promise of reading text faster, but a Chrome extension called Beeline Reader is using an unexpected tool to get there: colored text. Built on top of the Readability code, the extension works by reformatting the text on a page into a single stripped-down column, then color-coding alternating lines of text to ensure readers never get lost. According to a recent study, that's enough to get the average person through a block of text ten percent faster.
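
As a toy illustration of the alternating-line-color idea (not BeeLine Reader's actual implementation, which layers color onto the column Readability extracts), here's a sketch that renders successive wrapped lines of a text block in alternating colors.

```python
import textwrap
from html import escape

# Two arbitrary colors to alternate between.
COLORS = ["#1a1a1a", "#1a5276"]

def colorize(text: str, width: int = 60) -> str:
    """Return simple HTML with each wrapped line rendered in an alternating color."""
    lines = textwrap.wrap(text, width=width)
    spans = [
        f'<span style="color:{COLORS[i % len(COLORS)]}">{escape(line)}</span>'
        for i, line in enumerate(lines)
    ]
    return "<p>" + "<br>\n".join(spans) + "</p>"

print(colorize(
    "Lots of apps have offered the promise of reading text faster, but a "
    "Chrome extension called Beeline Reader is using an unexpected tool to "
    "get there: colored text."
))
```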

While the trick itself is simple, there's a surprising amount of psychological background to it. The developer tells Fast Company that he was inspired by the Stroop Test in psychology, which shows that readers inevitably perceive the color of the text they're...
