Module Picos

Framework for interoperable effects based concurrency.

This is essentially an interface between schedulers and other elements that need to communicate with a scheduler. Perhaps an enlightening analogy is to say that this is the POSIX of effects based schedulers.

ℹī¸ Picos, i.e. this module, is not intended to be an application level concurrent programming library or framework. If you are looking for a library or framework for programming concurrent applications, then this module is probably not what you are looking for.

The architecture of Picos

The core concepts of Picos are the Trigger (the ability to await for a signal), the Computation (a cancelable computation), and the Fiber (an independent thread of execution). These concepts are implemented in terms of effects and can be used to implement many kinds of higher level concurrent programming facilities.

Understanding cancelation

A central idea of Picos is to provide a collection of building blocks for parallelism safe cancelation. Consider the following characteristic example:

Mutex.protect mutex begin fun () ->
  while true do
    Condition.wait condition mutex
  done
end

Assume that the fiber executing the above computation might be canceled, at any point, by another fiber running in parallel. How could that be done both effectively and safely?

Picos allows Mutex and Condition to be implemented such that cancelation may safely take effect at or during calls to Mutex.lock and Condition.wait.

Cancelation in Picos

The Fiber concept in Picos corresponds to an independent thread of execution. A fiber may explicitly forbid or permit the scheduler to propagate cancelation to it. This is important for the implementation of some key concurrent abstractions such as condition variables, where it is necessary to forbid cancelation when the associated mutex is reacquired.
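
For illustration, a condition variable implementation might reacquire its associated mutex roughly as follows. This is only a sketch: Mutex refers to a hypothetical Picos aware mutex implementation as in the earlier example, the helper name is made up, and the code is assumed to run on a Picos compatible scheduler.

let reacquire mutex =
  (* Obtain the handle of the fiber running this code. *)
  let fiber = Fiber.current () in
  (* Propagation of cancelation to the fiber is forbidden for the duration
     of the call, so the lock is reacquired even if the fiber has already
     been canceled. *)
  Fiber.forbid fiber @@ fun () ->
  Mutex.lock mutex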

Each fiber has an associated Computation. A computation is something that needs to be completed either by returning a value through it or by canceling it with an exception. To cancel a fiber one cancels the computation associated with the fiber.
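
For example, a computation can be created, completed by returning a value through it, and awaited. Below is a minimal sketch; the name demo is only illustrative, and the Exn_bt module used for the cancelation exception is described later in this document.

let demo () =
  let computation = Computation.create () in
  (* Complete the computation by returning a value through it. *)
  Computation.return computation 42;
  (* A computation can be completed only once, so a later attempt to cancel
     it does not change the result. *)
  Computation.cancel computation (Exn_bt.get_callstack 2 Exit);
  (* await returns the value of a returned computation and raises the
     exception of a canceled computation. *)
  assert (Computation.await computation = 42)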

Before a computation has been completed, it is possible to attach a Trigger to the computation and to later detach the trigger from the computation. A trigger attached to a computation is signaled as the computation is completed.
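
A sketch of that protocol, assuming some computation value is at hand; the helper name is only illustrative:

let watch computation =
  let trigger = Trigger.create () in
  (* try_attach returns false if the computation has already been
     completed. *)
  if Computation.try_attach computation trigger then begin
    (* The trigger will be signaled when the computation is completed.  If
       the notification is no longer needed, the trigger can be detached so
       that the computation no longer retains it. *)
    Computation.detach computation trigger
  end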

The Trigger concept in Picos is what allows a fiber to be suspended and later resumed. A fiber can create a trigger, add it to any shared data structure(s), and await for the trigger to be signaled. The await operation, which is implemented by the scheduler, suspends the fiber and, in case the fiber permits cancelation, also attaches the trigger to the computation of the fiber. This is what allows a fiber to be resumed via cancelation of the computation.

The return value of await tells whether the fiber was resumed normally or due to being canceled, and the caller then needs to handle either case properly. After being canceled, depending on the concurrent abstraction being implemented, the caller might need to e.g. remove references to the trigger from the shared data structures, cancel asynchronous IO operations, or transfer ownership of a mutex to the next fiber in the queue of the mutex.
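
Putting the pieces together, a blocking operation typically follows a pattern along the following lines. This is a sketch only: add and remove stand for operations on some parallelism safe shared data structure, await is assumed to return None on normal resumption and Some exn_bt (an exception with backtrace, see the Exn_bt module below) on cancelation, and the code is assumed to run on a Picos compatible scheduler.

let block ~add ~remove =
  let trigger = Trigger.create () in
  (* Publish the trigger so that another fiber, thread, or domain can find
     it and signal it. *)
  add trigger;
  (* Suspend until the trigger is signaled.  When the fiber permits
     cancelation, the scheduler also attaches the trigger to the computation
     of the fiber, which allows cancelation to resume the fiber. *)
  match Trigger.await trigger with
  | None ->
    (* The trigger was signaled and the fiber was resumed normally. *)
    ()
  | Some exn_bt ->
    (* The fiber was canceled.  Undo the effect of add and then reraise the
       cancelation exception with its backtrace. *)
    remove trigger;
    Exn_bt.raise exn_bt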

Modules reference

We first open the Picos module

open Picos

and define a simple scheduler for running the examples in this document on OCaml 4

let run main =
  Picos_threaded.run ~forbid:false main

using the basic thread based scheduler and on OCaml 5

let run main =
  Picos_fifos.run ~forbid:false main

using the basic effects based scheduler. Both schedulers come with Picos as samples.
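
With either definition of run in place, the examples can then be executed in the same way. As a purely illustrative check, the following trivial main yields control back to the scheduler once and prints a line:

let () =
  run @@ fun () ->
  (* Inside run the code is executed by a fiber on a Picos compatible
     scheduler, so operations such as yield are available. *)
  Fiber.yield ();
  print_endline "Hello from a fiber!"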

Auxiliary modules

module Exn_bt = Picos_exn_bt

Exceptions with backtraces.
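
As a small sketch of how such values might be used, assuming operations along the lines of Exn_bt.get, to capture an exception together with the current backtrace, and Exn_bt.raise, to reraise it; the helper names below are only illustrative:

let capture thunk =
  match thunk () with
  | value -> Ok value
  | exception exn ->
    (* Capture the exception together with its backtrace. *)
    Error (Exn_bt.get exn)

let resolve = function
  | Ok value -> value
  | Error exn_bt ->
    (* Reraise the exception with the original backtrace preserved. *)
    Exn_bt.raise exn_bt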

Core modules

Please note that the example code snippets in this documentation may use e.g. the Domain and Unix modules in order to describe Picos concepts in isolation, in the absence of a Picos compatible scheduler.

module Trigger : sig ... end

Ability to await for a signal.

module Computation : sig ... end

A cancelable computation.

module Fiber : sig ... end

An independent thread of execution.

module Handler : sig ... end

Handler for the effects based operations of Picos for OCaml 4.