Execution context #304

@@ -11,11 +11,11 @@ languages: [ja]

## Introduction

Futures provide a way to reason about performing many operations
in parallel, in an efficient and non-blocking way.
A [`Future`](http://www.scala-lang.org/api/current/index.html#scala.concurrent.Future)
is a placeholder object for a value that may not yet exist.
Generally, the value of the `Future` is supplied concurrently and can subsequently be used.
Composing concurrent tasks in this way tends to result in faster, asynchronous, non-blocking parallel code.

By default, futures and promises are non-blocking, making use of

@@ -36,16 +36,16 @@ in the case of an application which requires blocking futures, for an underlying
environment to resize itself if necessary to guarantee progress.
-->

A typical future looks like this:

    val inverseFuture: Future[Matrix] = Future {
      fatMatrix.inverse() // non-blocking long lasting computation
    }(executionContext)

Or, more idiomatically:

    implicit val ec: ExecutionContext = ...
    val inverseFuture: Future[Matrix] = Future {
      fatMatrix.inverse()
    } // ec is implicitly passed
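
Not only the `Future { ... }` constructor but also combinators such as `map` and callbacks such as `onComplete` take an `ExecutionContext`, usually supplied implicitly. A minimal, self-contained sketch (using a toy computation in place of `fatMatrix.inverse()`):

    import scala.concurrent.{ExecutionContext, Future}
    import scala.util.{Failure, Success}

    implicit val ec: ExecutionContext = ExecutionContext.global

    // The Future body, `map`, and `onComplete` all pick up `ec` implicitly.
    val sumFuture: Future[Long] = Future {
      (1L to 1000000L).sum // stands in for a long-running computation
    }

    sumFuture.map(_ * 2).onComplete {
      case Success(result) => println(s"result: $result")
      case Failure(error)  => println(s"failed: $error")
    }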

@@ -56,29 +56,40 @@ Both code snippets delegate the execution of `fatMatrix.inverse()` to an `ExecutionContext`.

## Execution Context

Futures and Promises revolve around [`ExecutionContext`s](http://www.scala-lang.org/api/current/index.html#scala.concurrent.ExecutionContext), responsible for executing computations.

An `ExecutionContext` is similar to an [Executor](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Executor.html):
it is free to execute computations in a new thread, in a pooled thread or in the current thread
(although executing the computation in the current thread is discouraged -- more on that below).
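
For intuition, the core of the trait is small; a simplified sketch of its two abstract methods (the real trait has additional members):

    // Simplified sketch of scala.concurrent.ExecutionContext:
    trait ExecutionContext {
      // Runs the given task: on a new thread, a pooled thread, or the caller's thread.
      def execute(runnable: Runnable): Unit
      // Reports a failure that occurred in a task run by this context.
      def reportFailure(cause: Throwable): Unit
    }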

The `scala.concurrent` package comes out of the box with an `ExecutionContext` implementation, a global static thread pool.
It is also possible to convert an `Executor` into an `ExecutionContext`.
Finally, users are free to extend the `ExecutionContext` trait to implement their own execution contexts,
although this should only be done in rare cases.
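
For example, the provided global pool is typically brought into implicit scope like this (a minimal sketch):

    import scala.concurrent.Future
    // Puts the global ExecutionContext in implicit scope:
    import scala.concurrent.ExecutionContext.Implicits.global

    // The Future picks up the global context implicitly.
    val f: Future[Int] = Future { 21 + 21 }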

### The Global Execution Context

`ExecutionContext.global` is an `ExecutionContext` backed by a [ForkJoinPool](http://docs.oracle.com/javase/tutorial/essential/concurrency/forkjoin.html).
It should be sufficient for most situations but requires some care.
A `ForkJoinPool` manages a limited number of threads (the maximum number of threads being referred to as the *parallelism level*).
The number of concurrently blocking computations can exceed the parallelism level
only if each blocking call is wrapped inside a `blocking` call (more on that below).
Otherwise, there is a risk that the thread pool in the global execution context is starved,
and no computation can proceed.
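
To make the starvation risk concrete, here is a hedged sketch (the exact numbers depend on the machine's parallelism level; `Thread.sleep` stands in for any blocking call):

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    // With parallelism level N (the number of available processors), these N
    // futures each hold a pool thread while sleeping ...
    for (_ <- 1 to Runtime.getRuntime.availableProcessors) Future {
      Thread.sleep(60000) // blocking call NOT wrapped in `blocking`
    }

    // ... so this cheap computation cannot start until one of them finishes.
    Future { println("this may be delayed for up to a minute") }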

By default, `ExecutionContext.global` sets the parallelism level of its underlying fork-join pool to the number of available processors
([Runtime.availableProcessors](http://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html#availableProcessors%28%29)).
This configuration can be overridden by setting one (or more) of the following VM attributes:

* `scala.concurrent.context.minThreads` - defaults to `Runtime.availableProcessors`
* `scala.concurrent.context.numThreads` - can be a number or a multiplier (N) in the form `xN`; defaults to `Runtime.availableProcessors`
* `scala.concurrent.context.maxThreads` - defaults to `Runtime.availableProcessors`

The parallelism level will be set to `numThreads` as long as it remains within `[minThreads; maxThreads]`.
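
For instance, a hedged sketch of how these settings combine (flag names as listed above; the values are illustrative):

    // Illustrative JVM flags (set at launch), e.g.:
    //   -Dscala.concurrent.context.minThreads=2
    //   -Dscala.concurrent.context.numThreads=x2
    //   -Dscala.concurrent.context.maxThreads=16
    //
    // The effective parallelism is numThreads clamped to [minThreads, maxThreads]:
    val minThreads  = 2
    val numThreads  = 2 * Runtime.getRuntime.availableProcessors // "x2"
    val maxThreads  = 16
    val parallelism = math.min(maxThreads, math.max(minThreads, numThreads))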

As stated above, the `ForkJoinPool` can increase the number of threads beyond its `parallelismLevel` in the presence of blocking computation.
As explained in the `ForkJoinPool` API, this is only possible if the pool is explicitly notified:

    import scala.concurrent.Future
    import scala.concurrent.forkjoin._

@@ -121,7 +132,9 @@ Fortunately the concurrent package provides a convenient way for doing so:

Note that `blocking` is a general construct that will be discussed more in depth [below](#in_a_future).
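
For reference, a minimal sketch of wrapping a blocking call so that the pool is notified (the sleep stands in for real blocking I/O):

    import scala.concurrent.{blocking, Future}
    import scala.concurrent.ExecutionContext.Implicits.global

    Future {
      blocking {
        Thread.sleep(5000) // stands in for blocking I/O; the pool is notified
      }
      println("done blocking, still on the global pool")
    }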

Last but not least, you must remember that the `ForkJoinPool` is not designed for long lasting blocking operations.
Even when notified with `blocking` the pool might not spawn new workers as you would expect,
and when new workers are created there can be as many as 32767 of them.
To give you an idea, the following code will use 32000 threads:

    implicit val ec = ExecutionContext.global

@@ -138,9 +151,9 @@ To give you an idea, the following code will use 32000 threads:

If you need to wrap long lasting blocking operations we recommend using a dedicated `ExecutionContext`, for instance by wrapping a Java `Executor`.

Review comments on this change:

* FJP is a Java `Executor`, so I am unsure how this recommendation will help. Thoughts?
* Perhaps what was meant here is to create a dedicated execution context just for executing blocking code, so that non-blocking code running on a separate execution context is not influenced negatively.
* That was indeed the idea.
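
As the review discussion suggests, the point is to isolate blocking work on its own pool so the non-blocking context is unaffected; a hedged sketch (the pool type and the workload are arbitrary choices):

    import java.util.concurrent.Executors
    import scala.concurrent.{ExecutionContext, Future}

    // A separate pool reserved for blocking work, so the global
    // (non-blocking) execution context is not starved by it.
    val blockingEc: ExecutionContext =
      ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

    Future {
      Thread.sleep(10000) // long lasting blocking operation
    }(blockingEc)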

### Adapting a Java Executor

Using the `ExecutionContext.fromExecutor` method you can wrap a Java `Executor` into an `ExecutionContext`.
For instance:

    ExecutionContext.fromExecutor(new ThreadPoolExecutor( /* your configuration */ ))
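
Filled in with one possible configuration and used as the implicit context (a sketch; the pool parameters are illustrative, not a recommendation):

    import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}
    import scala.concurrent.{ExecutionContext, Future}

    // Illustrative parameters: core size 2, max size 4, 60s keep-alive, unbounded queue.
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(
      new ThreadPoolExecutor(2, 4, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue[Runnable]()))

    Future { println("runs on the ThreadPoolExecutor") } // ec is passed implicitly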

@@ -152,15 +165,20 @@ One might be tempted to have an `ExecutionContext` that runs computations within

    val currentThreadExecutionContext = ExecutionContext.fromExecutor(
      new Executor {
        // Do not do this!
        def execute(runnable: Runnable) { runnable.run() }
      })

This should be avoided as it introduces non-determinism in the execution of your future.

    Future {
      doSomething
    }(ExecutionContext.global).map {
      doSomethingElse
    }(currentThreadExecutionContext)

The `doSomethingElse` call might either execute in `doSomething`'s thread or in the main thread, and therefore be either asynchronous or synchronous.
As explained [here](http://blog.ometer.com/2011/07/24/callbacks-synchronous-and-asynchronous/) a callback should not be both.
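
If the intent behind such a context was simply to keep callbacks off the thread pool, a safer alternative (a sketch, not part of the original text) is a dedicated single-threaded executor: callbacks then always run asynchronously, but always on the same, known thread.

    import java.util.concurrent.Executors
    import scala.concurrent.ExecutionContext

    // Always asynchronous (never the caller's thread), always the same thread.
    val singleThreadExecutionContext: ExecutionContext =
      ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())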

Review comments on this change:

* This is perhaps too detailed, and concerns standing `ForkJoinPool` implementations more than execution contexts. I would omit this snippet.
* This section is named *The Global Execution Context*. IMO you ought to know how your `ExecutionContext.global` behaves. I would agree with you if the underlying implementation of the global EC wasn't a `ForkJoinPool`.