Increasingly, a wide range of computer applications, from phone apps through robots, behavior prediction for marketing, and scientific computations, operate on data from sensors. At the same time, all recent increases in computer performance have come through increasing parallelism. Sensor data usually takes the form of time series, where the ordering of values and their temporal and spatial locations are significant. This fact imposes sequential dependencies on algorithms used to process sensor data. This talk will describe new and emerging features of processors, including wider vector instructions, on-chip graphics processors, and field programmable gate array (FPGA) fabric, that provide additional opportunities for parallel speedup beyond those of traditional multicore CPUs. I will discuss research into several approaches to handling the serial data dependencies in sensor data to yield effective parallel algorithms that take advantage of these new processor features.
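As a small illustration of the kind of serial dependency the abstract describes (this is my own sketch, not the speaker's specific method): a running sum over a time series looks inherently sequential, since each output depends on the previous one, yet it can be computed in O(log n) vectorized passes using a Hillis-Steele scan, the classic building block for parallelizing such recurrences.

```python
import numpy as np

def prefix_sum_scan(x):
    """Inclusive prefix sum via a Hillis-Steele scan.

    Instead of one n-step sequential loop, perform ceil(log2(n))
    passes, each of which is a single data-parallel vector operation
    (the kind of step that maps onto SIMD units or a GPU).
    """
    y = np.asarray(x, dtype=float).copy()
    shift = 1
    while shift < len(y):
        # Each element accumulates the partial sum from `shift`
        # positions back; all elements update in parallel.
        y[shift:] = y[shift:] + y[:-shift]
        shift *= 2
    return y

print(prefix_sum_scan([1, 2, 3, 4]))  # [ 1.  3.  6. 10.]
```

The same doubling idea extends to any associative combining operation, which is why scan-based formulations are a common route to parallelizing recurrences over ordered sensor data.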
Lee Barford is Master Scientist at the Agilent Technologies Measurement Research Laboratory and Adjunct Professor of Computer Science and Engineering at the University of Nevada, Reno. After earning a PhD in Computer Science from Cornell University, he joined Hewlett-Packard Laboratories and then Agilent Laboratories. At both companies his research has focused on creating innovative software to make engineers in other disciplines---electronics engineering, mechanical engineering, and manufacturing engineering---more effective. His work has been used to improve R&D productivity and reduce manufacturing cost at leading companies in the technology and transportation industries, including Apple, Boeing, Cisco, Ford, HP, Microsoft, and NASA. He is the inventor or co-inventor of more than 60 patents. His application for automatically diagnosing faults in very large electronic systems, Fault Detective Test Analyzer, won Electronic Design News' Best Software of the Year award.