MPI Requirements
of the Network Layer
Presented to the OpenFabrics libfabric Working Group
January 21, 2014
Community feedback assembled
by Jeff Squyres, Cisco Systems
Many thanks to the contributors
(in no particular order)
• ETH Zurich
  • Torsten Hoefler
• Sandia National Labs
  • Ron Brightwell
  • Brian Barrett
  • Ryan Grant
• IBM
  • Chulho Kim
  • Carl Obert
  • Michael Blocksome
  • Perry Schmidt
• Cisco Systems
  • Jeff Squyres
  • Dave Goodell
  • Reese Faucette
  • Cesare Cantu
  • Upinder Malhi
• Oak Ridge National Labs
  • Scott Atchley
  • Pavel Shamis
• Argonne National Labs
  • Jeff Hammond

Slide 2
Many thanks to the contributors
(in no particular order)
• Intel
  • Sayantan Sur
  • Charles Archer
• AMD
  • Brad Benton
• Microsoft
  • Fab Tillier
• Cray
  • Krishna Kandalla
• U. Edinburgh / EPCC
  • Dan Holmes
• Mellanox
  • Devendar Bureddy
• U. Alabama Birmingham
  • Tony Skjellum
  • Amin Hassani
  • Shane Farmer
• SGI
  • Michael Raymond

Slide 3
Quick MPI overview
• High-level abstraction API
• No concept of a connection

• All communication:
• Is reliable
• Has some ordering rules
• Is comprised of typed messages

• Peer address is (communicator, integer) tuple
• I.e., virtualized
Slide 4
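
To make the addressing model concrete, here is a minimal added sketch (not from the original deck; all calls are standard MPI) of the virtualized (communicator, rank) peer addressing:

    /* Peer address is (communicator, integer rank) -- no connections,
     * no network addresses, and the buffer is ordinary stack memory. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            /* Reliable, typed, tagged message to peer (MPI_COMM_WORLD, 1). */
            MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }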
Quick MPI overview
• Communication modes
• Blocking and non-blocking (polled completion)
• Point-to-point: two-sided and one-sided
• Collective operations: broadcast, scatter, reduce, …etc.
• …and others, but those are the big ones

• Asynch. progression is required/strongly desired

• Message buffers are provided by the application
• They are not “special” (e.g., registered)
Slide 5
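
The blocking/non-blocking split and the collectives can be sketched in a few lines; this added fragment (standard MPI calls only, names chosen for illustration) shows initiation separated from polled completion:

    /* Non-blocking point-to-point: initiate now, complete later. */
    #include <mpi.h>

    void exchange(int *sbuf, int *rbuf, int n, int peer, MPI_Comm comm)
    {
        MPI_Request reqs[2];
        MPI_Irecv(rbuf, n, MPI_INT, peer, 0, comm, &reqs[0]);
        MPI_Isend(sbuf, n, MPI_INT, peer, 0, comm, &reqs[1]);
        /* ...computation could overlap here... */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        /* One of the "big" collectives: sum rbuf across all ranks. */
        MPI_Allreduce(MPI_IN_PLACE, rbuf, n, MPI_INT, MPI_SUM, comm);
    }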
Quick MPI overview
• MPI specification
• Governed by the MPI Forum standards body
• Currently at MPI-3.0

• MPI implementations
• Software + hardware implementation of the spec
• Some are open source, some are closed source
• Generally don’t care about interoperability (e.g., wire protocols)

Slide 6
MPI is a large community
• Community feedback represents union of:
• Different viewpoints
• Different MPI implementations
• Different hardware perspectives

• …and not all agree with each other
• For example…

Slide 7
Different MPI camps
Those who want high-level interfaces:
• Do not want to see memory registration
• Want tag matching
  • E.g., PSM
• Trust the network layer to do everything well under the covers

Those who want low-level interfaces:
• Want to have good memory registration infrastructure
• Want direct access to hardware capabilities
• Want to fully implement MPI interfaces themselves
  • Or, the MPI implementers are the kernel / firmware / hardware developers
Slide 8
Be careful what you ask for…
• …because you just got it

• Members of the MPI Forum would like to be involved in the libfabric design on an ongoing basis
  • Can we get an MPI libfabric listserv?

Slide 9
This is as far as we got
On 21 Jan 2014
Basic things MPI needs
• Messages (not streams)
• Efficient API
• Allow for low latency / high bandwidth
• Low number of instructions in the critical path
• Enable “zero copy”

• Separation of local action initiation and completion
• One-sided (including atomics) and two-sided semantics
• No requirement for communication buffer alignment
Slide 11
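
The separation of initiation and completion, and the one-sided semantics, look like this in standard MPI-3 one-sided calls; an added sketch assuming every rank exposes a window of n doubles:

    /* One-sided: write into a peer's memory without the peer posting
     * a receive.  MPI_Put initiates; the closing fence completes. */
    #include <mpi.h>

    void put_example(double *base, int n, int target, MPI_Comm comm)
    {
        MPI_Win win;
        MPI_Win_create(base, n * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, comm, &win);
        MPI_Win_fence(0, win);
        MPI_Put(base, n, MPI_DOUBLE, target, 0, n, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);   /* completes all outstanding Puts */
        MPI_Win_free(&win);
    }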
Basic things MPI needs
• Asynchronous progress independent of API calls
• Preferably via dedicated hardware

• Scalable communications with millions of peers
• Think of MPI as a fully-connected model
(even though it usually isn’t implemented that way)

• Today, runs with 3 million MPI processes in a job

Slide 12
Things MPI likes in verbs
• (all the basic needs from previous slide)
• Different modes of communication
• Reliable vs. unreliable
• Scalable connectionless communications (i.e., UD)

• Specify peer read/write address (i.e., RDMA)
• RDMA write with immediate (*)
• …but we want more (more on this later)

Slide 13
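
For reference, an added sketch of the RDMA-write-with-immediate path praised above; it assumes qp, mr, remote_addr, and rkey were established during connection setup, and uses only standard libibverbs calls. Note the immediate field is just 32 bits, one reason for the "we want more":

    #include <infiniband/verbs.h>
    #include <arpa/inet.h>
    #include <stdint.h>

    int rdma_write_imm(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,
                       uint32_t len, uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr = (uintptr_t)buf, .length = len, .lkey = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .sg_list = &sge, .num_sge = 1,
            .opcode = IBV_WR_RDMA_WRITE_WITH_IMM,
            .send_flags = IBV_SEND_SIGNALED,
            .imm_data = htonl(0xcafe),  /* only 4 bytes reach the target CQE */
        };
        struct ibv_send_wr *bad_wr;
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey = rkey;
        return ibv_post_send(qp, &wr, &bad_wr);
    }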
Things MPI likes in verbs
• Ability to re-use (short/inline) buffers immediately
• Polling and OS-native/fd-based blocking QP modes
• Discover devices, ports, and their capabilities (*)
• …but let’s not tie this to a specific hardware model

• Scatter / gather lists for sends

• Atomic operations (*)
• …but we want more (more on this later)
Slide 14
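
The short-buffer re-use point maps to IBV_SEND_INLINE: the provider copies the payload at post time, so the buffer is free for re-use as soon as ibv_post_send returns. An added sketch, assuming the QP was created with a sufficient max_inline_data:

    #include <infiniband/verbs.h>
    #include <stdint.h>

    int send_inline(struct ibv_qp *qp, void *buf, uint32_t len)
    {
        /* No memory registration needed: lkey is ignored for inline. */
        struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = len };
        struct ibv_send_wr wr = {
            .sg_list = &sge, .num_sge = 1,
            .opcode = IBV_WR_SEND,
            .send_flags = IBV_SEND_INLINE | IBV_SEND_SIGNALED,
        };
        struct ibv_send_wr *bad_wr;
        return ibv_post_send(qp, &wr, &bad_wr);
        /* buf may be modified immediately after return. */
    }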
Things MPI likes in verbs
• Can have multiple consumers in a single process
• API handles are independent of each other

Slide 15
Things MPI likes in verbs
• Verbs does not:
  • Require collective initialization across multiple processes
  • Require peers to have the same process image
  • Restrict completion order vs. delivery order
  • Restrict source/target address region (stack, data, heap)
  • Require a specific wire protocol (*)
    • …but it does impose limitations, e.g., the 40-byte GRH UD header
Slide 16
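
The GRH limitation above is concrete: on a UD QP, every posted receive buffer must reserve its first 40 bytes for the Global Routing Header, so the payload lands at offset 40. An added sketch using standard verbs calls (mr is assumed to cover buf with local-write access):

    #include <infiniband/verbs.h>
    #include <stdint.h>

    #define GRH_LEN 40   /* imposed by the UD transport */

    int post_ud_recv(struct ibv_qp *qp, struct ibv_mr *mr,
                     char *buf, uint32_t payload_len)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = GRH_LEN + payload_len,  /* GRH occupies the front */
            .lkey   = mr->lkey,
        };
        struct ibv_recv_wr wr = { .sg_list = &sge, .num_sge = 1 };
        struct ibv_recv_wr *bad_wr;
        return ibv_post_recv(qp, &wr, &bad_wr);
        /* After completion, the payload starts at buf + GRH_LEN. */
    }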
Things MPI likes in verbs
• Ability to connect to “unrelated” peers
• Cannot access peer (memory) without permission
• Cleans up everything upon process termination
• E.g., kernel and hardware resources are released

Slide 17
Other things MPI wants (described as verbs improvements)
• MTU is an int (not an enum)
• Specify timeouts to connection requests
  • …or have a CM that completes connections asynchronously
• All operations need to be non-blocking, including:
  • Address handle creation
  • Communication setup / teardown
  • Memory registration / deregistration
Slide 18
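
Today's registration path is synchronous, which is exactly what the non-blocking request targets; an added sketch of the current blocking call (standard libibverbs):

    #include <infiniband/verbs.h>
    #include <stddef.h>

    struct ibv_mr *register_buffer(struct ibv_pd *pd, void *buf, size_t len)
    {
        /* Blocks until the pages are pinned and the keys are valid --
         * MPI would rather initiate this and overlap the wait. */
        return ibv_reg_mr(pd, buf, len,
                          IBV_ACCESS_LOCAL_WRITE |
                          IBV_ACCESS_REMOTE_READ |
                          IBV_ACCESS_REMOTE_WRITE);
    }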
Other things MPI wants (described as verbs improvements)
• Specify buffer/length as function parameters
  • Specifying them as a struct requires extra memory accesses
  • …more on this later
• Ability to query how many credits are currently available in a QP
  • To support actions that consume more than one credit
• Remove concept of “queue pair”
  • Have standalone send channels and receive channels
Slide 19
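
The buffer/length complaint refers to the verbs posting convention: scatter/gather entries and work requests are structs written to memory and then re-read by the provider, which costs extra memory accesses on the critical path. This added sketch shows today's path; the commented-out flat-argument form is hypothetical, included only to illustrate the request:

    #include <infiniband/verbs.h>
    #include <stdint.h>

    /* Today: two structs are built in memory, then re-read. */
    int post_today(struct ibv_qp *qp, void *buf, uint32_t len, uint32_t lkey)
    {
        struct ibv_sge sge = {
            .addr = (uintptr_t)buf, .length = len, .lkey = lkey,
        };
        struct ibv_send_wr wr = {
            .sg_list = &sge, .num_sge = 1,
            .opcode = IBV_WR_SEND, .send_flags = IBV_SEND_SIGNALED,
        };
        struct ibv_send_wr *bad_wr;
        return ibv_post_send(qp, &wr, &bad_wr);
    }

    /* Wished-for (hypothetical, not a real API): arguments stay in
     * registers instead of being staged through structs, e.g.
     *   int send_flat(endpoint, buf, len, lkey, flags);           */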
Other things MPI wants (described as verbs improvements)
• Completion at target for an RDMA write
• Have ability to query if loopback communication is supported
• Clearly delineate what functionality must be supported vs. what is optional
  • Example: MPI provides (almost) the same functionality everywhere, regardless of hardware / platform
  • Verbs functionality is wildly different for each provider
Slide 20
Stop for today
Still consolidating more MPI community feedback


Editor's Notes

  1. Feedback from call: We like RDMA write with immediate… but 4 bytes isn’t enough. Re-state last bullet better: want completion on peer for an RDMA write.
  2. Replaced “inline messages” with “ability to re-use buffer immediately”.
  3. Replaced the MPI + PGAS example with “API handles are independent of each other”; split it off into its own slide.
  4. Specifically mentioned the 40-byte GRH UD header.
  5. Translated “Multiple PDs per process” to “ability to connect to ‘unrelated’ peers”.
  6. After all: added the 1st bullet as a reaction to feedback from slide 12.