DELIVER ASSEMBLY-LINE THROUGHPUT TO DATA-INTENSIVE WORKLOADS WITHOUT ENDLESS ROUNDTRIPS
The Speedata APU optimizes the flow of data through a configurable pipeline of compute elements. The elements are reconfigured once for the incoming workload; millions of data records then flow through them without the per-cycle instruction fetch and decode that is a capacity bottleneck on CPUs and GPUs, typically making them unfit for analytics.
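As a rough software analogy (not Speedata's API; all names here are hypothetical), the configure-once, stream-many model can be sketched as a pipeline that is composed up front and then applied to every record with no per-record dispatch decisions:

```python
# Hypothetical sketch: compose the stages once ("reconfigure for the workload"),
# then stream records through the fixed route.

def build_pipeline(stages):
    """Fix the stage order up front, like configuring the compute elements."""
    def run(record):
        for stage in stages:        # fixed route; no per-record instruction dispatch
            record = stage(record)
        return record
    return run

# Stand-ins for workload stages such as decompress and decode.
pipeline = build_pipeline([
    lambda r: r * 2,                # stand-in for "decompress"
    lambda r: r + 1,                # stand-in for "decode"
])

results = [pipeline(r) for r in range(3)]   # records flow through unchanged stages
```

The point of the analogy is that the routing decision is made once per workload, not once per record per cycle.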
ACCELERATE I/O ACROSS COLUMNAR FILE FORMATS
The APU uniquely decompresses, decodes, and processes millions (or even billions) of records per second from Parquet or ORC files, eliminating the bottleneck on other chips, which must write intermediate results back to memory between steps.
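To illustrate the idea of fusing decompress, decode, and processing into one pass (a simplified sketch, not a real Parquet reader; the record format here is invented for the example), the stages can be chained lazily so no intermediate column is ever materialized:

```python
# Hedged sketch: fuse decompress + decode + filter into one streaming pass,
# so intermediate results are never written back to memory as a whole column.
import zlib

def decode_records(compressed_page):
    raw = zlib.decompress(compressed_page)   # decompress the page once
    for token in raw.split(b","):            # decode records lazily, one at a time
        yield int(token)                     # no intermediate list is built

page = zlib.compress(b"10,25,7,42")          # toy "compressed page"
big = [v for v in decode_records(page) if v > 20]   # fused decode + filter
```

Real columnar readers are far more involved, but the same fusion principle is what removes the store-and-reload round trips.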
ACCELERATE QUERIES FOR A HUGE VARIETY OF DATA
While CPUs and GPUs handle branch divergence and variable field lengths poorly, the APU is designed to execute a wide variety of tasks in parallel and to handle any data type and field length found in a database.
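A small sketch shows why variable field lengths are hostile to lockstep (SIMD/GPU) execution: each record's width, and therefore the next record's offset, depends on the data itself, so parallel lanes cannot advance uniformly. The length-prefixed format below is hypothetical, chosen only to make the data-dependent control flow visible:

```python
# Hypothetical format: 1-byte length prefix followed by the field's payload.
# The loop's stride is data-dependent, which serializes lockstep hardware.

def decode_varlen(buf):
    out, i = [], 0
    while i < len(buf):
        n = buf[i]                          # field width varies per record
        out.append(buf[i + 1 : i + 1 + n].decode())
        i += 1 + n                          # next offset depends on this record
    return out

fields = decode_varlen(b"\x02hi\x05there\x01!")
```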
CODELESSLY INTEGRATE INTO EXISTING WORKLOADS
The APU automatically intercepts work that would previously have gone to the CPU and transparently reroutes it to the accelerator with minimal overhead. This frees data engineers from migrating their workloads and managing testing and debugging just to achieve modest speedups from processors that were never designed for analytics.
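For a sense of what "codeless" integration typically looks like in a data stack, accelerators for engines such as Apache Spark are commonly enabled through the engine's plugin mechanism, so existing jobs run unchanged and only cluster configuration is touched. The plugin class name below is hypothetical, for illustration only, and is not Speedata's actual package:

```properties
# spark-defaults.conf fragment (hypothetical plugin class, illustration only)
# Eligible operators are offloaded transparently; queries are untouched.
spark.plugins  com.example.apu.AcceleratorPlugin
```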
WHY SPEEDATA APU?
Boost top line
Achieve up to 100x faster time to insight
Easily answer previously impossible questions
Pre-process data to unlock value from AI workloads
Process more data with a fraction of the servers
Cut data center footprint and energy costs by up to 95%
Eliminate the opportunity cost of business-critical projects
Turn long-running jobs into interactive workflows
Rapidly test new ideas, expand, and refresh data
Unburden infra and IT teams from hardware maintenance