A Promise is an object representing the eventual completion or failure of an asynchronous operation. Since most people are consumers of already-created promises, this guide will explain consumption of returned promises before explaining how to create them.

Essentially, a promise is a returned object to which you attach callbacks, instead of passing callbacks into a function.

Imagine a function, createAudioFileAsync(), which asynchronously generates a sound file given a configuration record and two callback functions: one called if the audio file is successfully created, and the other called if an error occurs. Here's some code that uses createAudioFileAsync():
function successCallback(result) {
  console.log("Audio file ready at URL: " + result);
}

function failureCallback(error) {
  console.error("Error generating audio file: " + error);
}
createAudioFileAsync(audioSettings, successCallback, failureCallback);
If createAudioFileAsync() were rewritten to return a promise, you would attach your callbacks to it instead:
createAudioFileAsync(audioSettings).then(successCallback, failureCallback);
This convention has several advantages. We will explore some here.
Unlike old-fashioned passed-in callbacks, a promise comes with some guarantees:
- Callbacks added with then() will never be invoked before the completion of the current run of the JavaScript event loop.
- These callbacks will be invoked even if they were added after the success or failure of the asynchronous operation that the promise represents.
- Multiple callbacks may be added by calling then() several times. They will be invoked one after another, in the order in which they were inserted (see the sketch after this list).
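To make these guarantees concrete, here is a minimal sketch (not part of the original example; it just uses an already-resolved promise and console.log):

const promise = Promise.resolve("done");

promise.then(value => console.log("first handler: " + value));
console.log("synchronous code runs first");

// Even though the promise is already resolved, this handler is still invoked,
// after the first one, once the current run of the event loop completes.
promise.then(value => console.log("second handler: " + value));

// Logs, in order:
// synchronous code runs first
// first handler: done
// second handler: done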
One of the great things about using promises is chaining.
A common need is to execute two or more asynchronous operations back to back, where each subsequent operation starts when the previous operation succeeds, with the result from the previous step. We accomplish this by creating a promise chain. Here's the magic: the then() function returns a new promise, different from the original:
const promise = doSomething();
const promise2 = promise.then(successCallback, failureCallback);
or
const promise2 = doSomething().then(successCallback, failureCallback);
This second promise (promise2) represents the completion not just of doSomething(), but also of the successCallback or failureCallback you passed in, which can be other asynchronous functions returning a promise. When that's the case, any callbacks added to promise2 get queued behind the promise returned by either successCallback or failureCallback. Basically, each promise represents the completion of another asynchronous step in the chain. In the old days, doing several asynchronous operations in a row would lead to the classic callback pyramid of doom:
doSomething(function(result) {
  doSomethingElse(result, function(newResult) {
    doThirdThing(newResult, function(finalResult) {
      console.log('Got the final result: ' + finalResult);
    }, failureCallback);
  }, failureCallback);
}, failureCallback);
With modern functions, we attach our callbacks to the returned promises instead, forming a promise chain:
doSomething()
  .then(function(result) {
    return doSomethingElse(result);
  })
  .then(function(newResult) {
    return doThirdThing(newResult);
  })
  .then(function(finalResult) {
    console.log('Got the final result: ' + finalResult);
  })
  .catch(failureCallback);
The arguments to then are optional, and catch(failureCallback) is short for then(null, failureCallback). You might see this expressed with arrow functions instead:
doSomething()
  .then(result => doSomethingElse(result))
  .then(newResult => doThirdThing(newResult))
  .then(finalResult => {
    console.log(`Got the final result: ${finalResult}`);
  })
  .catch(failureCallback);
Important: Always return results, otherwise callbacks won't catch the result of a previous promise (with arrow functions () => x is short for () => { return x; }).
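As a small illustrative sketch of what goes wrong (reusing the placeholder doSomething() and doSomethingElse() from above), compare a chain that forgets the return with one that includes it:

// Broken: the inner promise is not returned, so the next .then()
// does not wait for doSomethingElse() and receives undefined.
doSomething()
  .then(result => {
    doSomethingElse(result); // missing return
  })
  .then(newResult => console.log(newResult)); // logs "undefined"

// Fixed: returning the promise makes the chain wait for it.
doSomething()
  .then(result => {
    return doSomethingElse(result);
  })
  .then(newResult => console.log(newResult)); // logs the real result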
You might recall seeing failureCallback three times in the pyramid of doom earlier, compared to only once at the end of the promise chain:
doSomething()
  .then(result => doSomethingElse(result))
  .then(newResult => doThirdThing(newResult))
  .then(finalResult => console.log(`Got the final result: ${finalResult}`))
  .catch(failureCallback);
If there's an exception, the browser will look down the chain for .catch() handlers or onRejected. This is very much modeled after how synchronous code works:
try {
  const result = syncDoSomething();
  const newResult = syncDoSomethingElse(result);
  const finalResult = syncDoThirdThing(newResult);
  console.log(`Got the final result: ${finalResult}`);
} catch (error) {
  failureCallback(error);
}
This symmetry with asynchronous code culminates in the async/await syntactic sugar in ECMAScript 2017:
async function foo() {
  try {
    const result = await doSomething();
    const newResult = await doSomethingElse(result);
    const finalResult = await doThirdThing(newResult);
    console.log(`Got the final result: ${finalResult}`);
  } catch (error) {
    failureCallback(error);
  }
}
It builds on promises, e.g. doSomething() is the same function as before. Promises solve a fundamental flaw with the callback pyramid of doom, by catching all errors, even thrown exceptions and programming errors. This is essential for functional composition of asynchronous operations.
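For instance (a minimal sketch, reusing the placeholder doSomething() from earlier), an exception thrown inside a handler becomes a rejection and ends up in the catch():

doSomething()
  .then(result => {
    // A thrown exception (or a programming error, like calling an undefined
    // function) rejects the promise returned by this then()...
    throw new Error("Something failed after doSomething()");
  })
  .catch(error => {
    // ...and is caught here, mirroring the synchronous try/catch above.
    console.error(error);
  });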
Besides catching errors within a chain, you can also react to rejections globally. Whenever a promise is rejected, one of two events is sent to the global scope:
- rejectionhandled: Sent when a handler is attached to a rejected promise that has already caused an unhandledrejection event.
- unhandledrejection: Sent when a promise is rejected but there is no rejection handler available.
In both cases, the event (of type PromiseRejectionEvent) has as members a promise property indicating the promise that was rejected, and a reason property that provides the reason given for the promise to be rejected. These make it possible to offer fallback error handling for promises, as well as to help debug issues with your promise management. These handlers are global per context, so all errors will go to the same event handlers, regardless of source.
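In a browser, for example, such global handlers might look like the following sketch (the logging is only illustrative):

window.addEventListener("unhandledrejection", event => {
  // event.promise is the rejected promise, event.reason the rejection value.
  console.warn("Unhandled rejection:", event.reason, event.promise);

  // preventDefault() stops the default logging of the rejection to the console.
  event.preventDefault();
});

window.addEventListener("rejectionhandled", event => {
  console.log("A previously unhandled rejection has now been handled:", event.reason);
});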
One case of special usefulness: when writing code for Node.js, it's common that modules you include in your project may have unhandled rejected promises, logged to the console by the Node.js runtime. You can capture them for analysis and handling by your code—or just to avoid having them cluttering up your output—by adding a handler for the Node.js unhandledRejection event (notice the difference in capitalization of the name), like this:
process.on("unhandledRejection", (reason, promise) => {
  /* You might start here by adding code to examine the
   * "promise" and "reason" values. */
});
For Node.js, to prevent the error from being logged to the console (the default action that would otherwise occur), adding that process.on() listener is all that is necessary; there's no need for an equivalent of the browser runtime's preventDefault() method.
However, if you add that process.on listener but don't also have code within it to handle rejected promises, they will just be dropped on the floor and silently ignored. So ideally, you should add code within that listener to examine each rejected promise and make sure it was not caused by an actual code bug.
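As a sketch of what that might look like (isExpectedRejection() is a hypothetical helper you would write yourself, not part of Node.js):

process.on("unhandledRejection", (reason, promise) => {
  // Hypothetical predicate for rejections your code knowingly leaves
  // unhandled, e.g. noise from a dependency you cannot change.
  if (isExpectedRejection(reason)) {
    return; // swallow only the rejections you have deliberately accepted
  }
  // Anything else is likely a real bug, so make it visible again.
  console.error("Possibly buggy unhandled rejection:", reason, promise);
});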
A Promise can be created from scratch using its constructor. This should be needed only to wrap old APIs. In an ideal world, all asynchronous functions would already return promises. Unfortunately, some APIs still expect success and/or failure callbacks to be passed in the old way. The most obvious example is the setTimeout() function:
setTimeout(() => saySomething("10 seconds passed"), 10*1000);
Mixing old-style callbacks and promises is problematic: if saySomething() fails or contains a programming error, nothing catches it, because setTimeout gives us no way to report failure back to the calling code. Luckily we can wrap setTimeout in a promise. Best practice is to wrap problematic functions at the lowest possible level, and then never call them directly again:
const wait = ms => new Promise(resolve => setTimeout(resolve, ms));
wait(10*1000).then(() => saySomething("10 seconds")).catch(failureCallback);
Basically, the promise constructor takes an executor function that lets us resolve or reject a promise manually. Since setTimeout() doesn't really fail, we left out reject in this case.
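When the wrapped operation can fail, the executor would use both callbacks. As a sketch only, here is how one might wrap a hypothetical error-first callback API, readConfig(path, callback), so that failures surface as rejections:

// readConfig(path, callback) is a hypothetical old-style API that calls
// callback(error) on failure and callback(null, data) on success.
const readConfigAsync = path =>
  new Promise((resolve, reject) => {
    readConfig(path, (error, data) => {
      if (error) {
        reject(error); // failures become rejections, visible to .catch()
      } else {
        resolve(data); // success fulfills the promise with the data
      }
    });
  });

readConfigAsync("app.json")
  .then(data => console.log(data))
  .catch(failureCallback);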
Composition tools such as Promise.all() let us start several asynchronous operations in parallel and wait for them all to finish:
Promise.all([func1(), func2(), func3()])
  .then(([result1, result2, result3]) => {
    /* use result1, result2 and result3 */
  });
Sequential composition is possible using some clever JavaScript:
[func1, func2, func3].reduce((p, f) => p.then(f), Promise.resolve())
  .then(result3 => { /* use result3 */ });
Basically, we reduce an array of asynchronous functions down to a promise chain equivalent to:
Promise.resolve().then(func1).then(func2).then(func3);
This can be made into a reusable compose function, which is common in functional programming:
const applyAsync = (acc, val) => acc.then(val);
const composeAsync = (...funcs) => x => funcs.reduce(applyAsync, Promise.resolve(x));
The composeAsync() function will accept any number of functions as arguments, and will return a new function that accepts an initial value to be passed through the composition pipeline:
const transformData = composeAsync(func1, func2, func3);
const result3 = transformData(data); // result3 is a promise for the final value
In ECMAScript 2017, sequential composition can be done more simply with async/await:
// Inside an async function (await is only valid there, or at the top level of a module):
let result;
for (const f of [func1, func2, func3]) {
  result = await f(result);
}
/* use last result (i.e. result3) */
For other topics related to promises, such as the following, you can refer to the MDN documentation:
- Timing
- Nesting
- Common mistakes
- When promises and tasks collide