Engineering, Data / ML

Presto® Express: Speeding up Query Processing with Minimal Resources

November 7 / Global
Figure 1: Uber Presto operational overview.
Figure 2: High-level Presto architecture.
Figure 3: Queries throttled due to consumer and user limitations.
Figure 4: Queries throttled due to cluster availability.
Figure 5: Interactive query latency.
Figure 6: Batch query latency.
Figure 7: Confusion matrix of predictions for express queries.
Figure 8: Experiment result.
Figure 9: Pinot query determines if a query is express.
Figure 10: Express query latency.
Figure 11: High-level architecture of initial Presto express design.
Figure 12: Daily CPU usage of each cluster.
Figure 13: Daily query count of each cluster.
Figure 14: High-level architecture of Presto express final design.
Figure 15: Comparing query count of express and non-express queries in the on-prem batch low tier.
Figure 16: Comparing P90 queuing time for the express and non-express queries.
Figure 17: Comparing the P90 runtime for express and non-express queries.
Mingjia Hang

Mingjia Hang is a Senior Software Engineer at Uber. She’s been working on enhancing the Presto ecosystem and developing new connectors, including the Pinot Datalake connector.

Gurmeet Singh

Gurmeet Singh is a Staff Software Engineer at Uber and Tech Lead on the Query Analytics Ecosystem.

Posted by Mingjia Hang, Gurmeet Singh