From b83c246505299774e9d3c4cb8cf40c00f2d0a2b2 Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Fri, 14 Jun 2019 00:38:27 -0300 Subject: [PATCH 01/17] Move files --- locale/pt-br/docs/guides/abi-stability.md | 118 ++++ .../guides/anatomy-of-an-http-transaction.md | 430 ++++++++++++ .../docs/guides/backpressuring-in-streams.md | 637 ++++++++++++++++++ .../docs/guides/blocking-vs-non-blocking.md | 148 ++++ .../guides/buffer-constructor-deprecation.md | 281 ++++++++ .../docs/guides/debugging-getting-started.md | 244 +++++++ .../docs/guides/diagnostics-flamegraph.md | 120 ++++ locale/pt-br/docs/guides/domain-postmortem.md | 444 ++++++++++++ .../docs/guides/dont-block-the-event-loop.md | 476 +++++++++++++ locale/pt-br/docs/meta/topics/dependencies.md | 102 +++ 10 files changed, 3000 insertions(+) create mode 100644 locale/pt-br/docs/guides/abi-stability.md create mode 100644 locale/pt-br/docs/guides/anatomy-of-an-http-transaction.md create mode 100644 locale/pt-br/docs/guides/backpressuring-in-streams.md create mode 100644 locale/pt-br/docs/guides/blocking-vs-non-blocking.md create mode 100644 locale/pt-br/docs/guides/buffer-constructor-deprecation.md create mode 100644 locale/pt-br/docs/guides/debugging-getting-started.md create mode 100644 locale/pt-br/docs/guides/diagnostics-flamegraph.md create mode 100644 locale/pt-br/docs/guides/domain-postmortem.md create mode 100644 locale/pt-br/docs/guides/dont-block-the-event-loop.md create mode 100644 locale/pt-br/docs/meta/topics/dependencies.md diff --git a/locale/pt-br/docs/guides/abi-stability.md b/locale/pt-br/docs/guides/abi-stability.md new file mode 100644 index 0000000000000..b010c70a7b0e2 --- /dev/null +++ b/locale/pt-br/docs/guides/abi-stability.md @@ -0,0 +1,118 @@ +--- +title: ABI Stability +layout: docs.hbs +--- + +# ABI Stability + +## Introduction +An Application Binary Interface (ABI) is a way for programs to call functions +and use data structures from other compiled programs. 
It is the compiled version
of an Application Programming Interface (API). In other words, the header files
describing the classes, functions, data structures, enumerations, and constants
which enable an application to perform a desired task correspond by way of
compilation to a set of addresses and expected parameter values and memory
structure sizes and layouts with which the provider of the ABI was compiled.

The application using the ABI must be compiled such that the available
addresses, expected parameter values, and memory structure sizes and layouts
agree with those with which the ABI provider was compiled. This is usually
accomplished by compiling against the headers provided by the ABI provider.

Since the provider of the ABI and the user of the ABI may be compiled at
different times with different versions of the compiler, a portion of the
responsibility for ensuring ABI compatibility lies with the compiler. Different
versions of the compiler, perhaps provided by different vendors, must all
produce the same ABI from a header file with certain content, and must produce
code for the application using the ABI that accesses the API described in a
given header according to the conventions of the ABI resulting from the
description in the header. Modern compilers have a fairly good track record of
not breaking the ABI compatibility of the applications they compile.

The remaining responsibility for ensuring ABI compatibility lies with the team
maintaining the header files which provide the API that results, upon
compilation, in the ABI that is to remain stable. Changes to the header files
can be made, but the nature of the changes has to be closely tracked to ensure
that, upon compilation, the ABI does not change in a way that will render
existing users of the ABI incompatible with the new version.

## ABI Stability in Node.js
Node.js provides header files maintained by several independent teams.
For +example, header files such as `node.h` and `node_buffer.h` are maintained by +the Node.js team. `v8.h` is maintained by the V8 team, which, although in close +co-operation with the Node.js team, is independent, and with its own schedule +and priorities. Thus, the Node.js team has only partial control over the +changes that are introduced in the headers the project provides. As a result, +the Node.js project has adopted [semantic versioning](https://semver.org/). +This ensures that the APIs provided by the project will result in a stable ABI +for all minor and patch versions of Node.js released within one major version. +In practice, this means that the Node.js project has committed itself to +ensuring that a Node.js native addon compiled against a given major version of +Node.js will load successfully when loaded by any Node.js minor or patch version +within the major version against which it was compiled. + +## N-API +Demand has arisen for equipping Node.js with an API that results in an ABI that +remains stable across multiple Node.js major versions. The motivation for +creating such an API is as follows: +* The JavaScript language has remained compatible with itself since its very +early days, whereas the ABI of the engine executing the JavaScript code changes +with every major version of Node.js. This means that applications consisting of +Node.js packages written entirely in JavaScript need not be recompiled, +reinstalled, or redeployed as a new major version of Node.js is dropped into +the production environment in which such applications run. In contrast, if an +application depends on a package that contains a native addon, the application +has to be recompiled, reinstalled, and redeployed whenever a new major version +of Node.js is introduced into the production environment. 
This disparity +between Node.js packages containing native addons and those that are written +entirely in JavaScript has added to the maintenance burden of production +systems which rely on native addons. + +* Other projects have started to produce JavaScript interfaces that are +essentially alternative implementations of Node.js. Since these projects are +usually built on a different JavaScript engine than V8, their native addons +necessarily take on a different structure and use a different API. Nevertheless, +using a single API for a native addon across different implementations of the +Node.js JavaScript API would allow these projects to take advantage of the +ecosystem of JavaScript packages that has accrued around Node.js. + +* Node.js may contain a different JavaScript engine in the future. This means +that, externally, all Node.js interfaces would remain the same, but the V8 +header file would be absent. Such a step would cause the disruption of the +Node.js ecosystem in general, and that of the native addons in particular, if +an API that is JavaScript engine agnostic is not first provided by Node.js and +adopted by native addons. + +To these ends Node.js has introduced N-API in version 8.6.0 and marked it as a +stable component of the project as of Node.js 8.12.0. The API is defined in the +headers [`node_api.h`][] and [`node_api_types.h`][], and provides a forward- +compatibility guarantee that crosses the Node.js major version boundary. The +guarantee can be stated as follows: + +**A given version *n* of N-API will be available in the major version of +Node.js in which it was published, and in all subsequent versions of Node.js, +including subsequent major versions.** + +A native addon author can take advantage of the N-API forward compatibility +guarantee by ensuring that the addon makes use only of APIs defined in +`node_api.h` and data structures and constants defined in `node_api_types.h`. 
+By doing so, the author facilitates adoption of their addon by indicating to +production users that the maintenance burden for their application will increase +no more by the addition of the native addon to their project than it would by +the addition of a package written purely in JavaScript. + +N-API is versioned because new APIs are added from time to time. Unlike +semantic versioning, N-API versioning is cumulative. That is, each version of +N-API conveys the same meaning as a minor version in the semver system, meaning +that all changes made to N-API will be backwards compatible. Additionally, new +N-APIs are added under an experimental flag to give the community an opportunity +to vet them in a production environment. Experimental status means that, +although care has been taken to ensure that the new API will not have to be +modified in an ABI-incompatible way in the future, it has not yet been +sufficiently proven in production to be correct and useful as designed and, as +such, may undergo ABI-incompatible changes before it is finally incorporated +into a forthcoming version of N-API. That is, an experimental N-API is not yet +covered by the forward compatibility guarantee. + +[`node_api.h`]: https://github.com/nodejs/node/blob/master/src/node_api.h +[`node_api_types.h`]: https://github.com/nodejs/node/blob/master/src/node_api_types.h diff --git a/locale/pt-br/docs/guides/anatomy-of-an-http-transaction.md b/locale/pt-br/docs/guides/anatomy-of-an-http-transaction.md new file mode 100644 index 0000000000000..289514b9c5537 --- /dev/null +++ b/locale/pt-br/docs/guides/anatomy-of-an-http-transaction.md @@ -0,0 +1,430 @@ +--- +title: Anatomy of an HTTP Transaction +layout: docs.hbs +--- + +# Anatomy of an HTTP Transaction + +The purpose of this guide is to impart a solid understanding of the process of +Node.js HTTP handling. We'll assume that you know, in a general sense, how HTTP +requests work, regardless of language or programming environment. 
We'll also +assume a bit of familiarity with Node.js [`EventEmitters`][] and [`Streams`][]. +If you're not quite familiar with them, it's worth taking a quick read through +the API docs for each of those. + +## Create the Server + +Any node web server application will at some point have to create a web server +object. This is done by using [`createServer`][]. + +```javascript +const http = require('http'); + +const server = http.createServer((request, response) => { + // magic happens here! +}); +``` + +The function that's passed in to [`createServer`][] is called once for every +HTTP request that's made against that server, so it's called the request +handler. In fact, the [`Server`][] object returned by [`createServer`][] is an +[`EventEmitter`][], and what we have here is just shorthand for creating a +`server` object and then adding the listener later. + +```javascript +const server = http.createServer(); +server.on('request', (request, response) => { + // the same kind of magic happens here! +}); +``` + +When an HTTP request hits the server, node calls the request handler function +with a few handy objects for dealing with the transaction, `request` and +`response`. We'll get to those shortly. + +In order to actually serve requests, the [`listen`][] method needs to be called +on the `server` object. In most cases, all you'll need to pass to `listen` is +the port number you want the server to listen on. There are some other options +too, so consult the [API reference][]. + +## Method, URL and Headers + +When handling a request, the first thing you'll probably want to do is look at +the method and URL, so that appropriate actions can be taken. Node makes this +relatively painless by putting handy properties onto the `request` object. + +```javascript +const { method, url } = request; +``` +> **Note:** The `request` object is an instance of [`IncomingMessage`][]. + +The `method` here will always be a normal HTTP method/verb. 
The `url` is the +full URL without the server, protocol or port. For a typical URL, this means +everything after and including the third forward slash. + +Headers are also not far away. They're in their own object on `request` called +`headers`. + +```javascript +const { headers } = request; +const userAgent = headers['user-agent']; +``` + +It's important to note here that all headers are represented in lower-case only, +regardless of how the client actually sent them. This simplifies the task of +parsing headers for whatever purpose. + +If some headers are repeated, then their values are overwritten or joined +together as comma-separated strings, depending on the header. In some cases, +this can be problematic, so [`rawHeaders`][] is also available. + +## Request Body + +When receiving a `POST` or `PUT` request, the request body might be important to +your application. Getting at the body data is a little more involved than +accessing request headers. The `request` object that's passed in to a handler +implements the [`ReadableStream`][] interface. This stream can be listened to or +piped elsewhere just like any other stream. We can grab the data right out of +the stream by listening to the stream's `'data'` and `'end'` events. + +The chunk emitted in each `'data'` event is a [`Buffer`][]. If you know it's +going to be string data, the best thing to do is collect the data in an array, +then at the `'end'`, concatenate and stringify it. + +```javascript +let body = []; +request.on('data', (chunk) => { + body.push(chunk); +}).on('end', () => { + body = Buffer.concat(body).toString(); + // at this point, `body` has the entire request body stored in it as a string +}); +``` + +> **Note:** This may seem a tad tedious, and in many cases, it is. Luckily, +there are modules like [`concat-stream`][] and [`body`][] on [`npm`][] which can +help hide away some of this logic. 
It's important to have a good understanding +of what's going on before going down that road, and that's why you're here! + +## A Quick Thing About Errors + +Since the `request` object is a [`ReadableStream`][], it's also an +[`EventEmitter`][] and behaves like one when an error happens. + +An error in the `request` stream presents itself by emitting an `'error'` event +on the stream. **If you don't have a listener for that event, the error will be +*thrown*, which could crash your Node.js program.** You should therefore add an +`'error'` listener on your request streams, even if you just log it and +continue on your way. (Though it's probably best to send some kind of HTTP error +response. More on that later.) + +```javascript +request.on('error', (err) => { + // This prints the error message and stack trace to `stderr`. + console.error(err.stack); +}); +``` + +There are other ways of [handling these errors][] such as +other abstractions and tools, but always be aware that errors can and do happen, +and you're going to have to deal with them. + +## What We've Got so Far + +At this point, we've covered creating a server, and grabbing the method, URL, +headers and body out of requests. When we put that all together, it might look +something like this: + +```javascript +const http = require('http'); + +http.createServer((request, response) => { + const { headers, method, url } = request; + let body = []; + request.on('error', (err) => { + console.error(err); + }).on('data', (chunk) => { + body.push(chunk); + }).on('end', () => { + body = Buffer.concat(body).toString(); + // At this point, we have the headers, method, url and body, and can now + // do whatever we need to in order to respond to this request. + }); +}).listen(8080); // Activates this server, listening on port 8080. +``` + +If we run this example, we'll be able to *receive* requests, but not *respond* +to them. 
In fact, if you hit this example in a web browser, your request would +time out, as nothing is being sent back to the client. + +So far we haven't touched on the `response` object at all, which is an instance +of [`ServerResponse`][], which is a [`WritableStream`][]. It contains many +useful methods for sending data back to the client. We'll cover that next. + +## HTTP Status Code + +If you don't bother setting it, the HTTP status code on a response will always +be 200. Of course, not every HTTP response warrants this, and at some point +you'll definitely want to send a different status code. To do that, you can set +the `statusCode` property. + +```javascript +response.statusCode = 404; // Tell the client that the resource wasn't found. +``` + +There are some other shortcuts to this, as we'll see soon. + +## Setting Response Headers + +Headers are set through a convenient method called [`setHeader`][]. + +```javascript +response.setHeader('Content-Type', 'application/json'); +response.setHeader('X-Powered-By', 'bacon'); +``` + +When setting the headers on a response, the case is insensitive on their names. +If you set a header repeatedly, the last value you set is the value that gets +sent. + +## Explicitly Sending Header Data + +The methods of setting the headers and status code that we've already discussed +assume that you're using "implicit headers". This means you're counting on node +to send the headers for you at the correct time before you start sending body +data. + +If you want, you can *explicitly* write the headers to the response stream. +To do this, there's a method called [`writeHead`][], which writes the status +code and the headers to the stream. + +```javascript +response.writeHead(200, { + 'Content-Type': 'application/json', + 'X-Powered-By': 'bacon' +}); +``` + +Once you've set the headers (either implicitly or explicitly), you're ready to +start sending response data. 

## Sending Response Body

Since the `response` object is a [`WritableStream`][], writing a response body
out to the client is just a matter of using the usual stream methods.

```javascript
response.write('<html>');
response.write('<body>');
response.write('<h1>Hello, World!</h1>');
response.write('</body>');
response.write('</html>');
response.end();
```

The `end` function on streams can also take in some optional data to send as the
last bit of data on the stream, so we can simplify the example above as follows.

```javascript
response.end('<html><body><h1>Hello, World!</h1></body></html>
'); +``` + +> **Note:** It's important to set the status and headers *before* you start +writing chunks of data to the body. This makes sense, since headers come before +the body in HTTP responses. + +## Another Quick Thing About Errors + +The `response` stream can also emit `'error'` events, and at some point you're +going to have to deal with that as well. All of the advice for `request` stream +errors still applies here. + +## Put It All Together + +Now that we've learned about making HTTP responses, let's put it all together. +Building on the earlier example, we're going to make a server that sends back +all of the data that was sent to us by the user. We'll format that data as JSON +using `JSON.stringify`. + +```javascript +const http = require('http'); + +http.createServer((request, response) => { + const { headers, method, url } = request; + let body = []; + request.on('error', (err) => { + console.error(err); + }).on('data', (chunk) => { + body.push(chunk); + }).on('end', () => { + body = Buffer.concat(body).toString(); + // BEGINNING OF NEW STUFF + + response.on('error', (err) => { + console.error(err); + }); + + response.statusCode = 200; + response.setHeader('Content-Type', 'application/json'); + // Note: the 2 lines above could be replaced with this next one: + // response.writeHead(200, {'Content-Type': 'application/json'}) + + const responseBody = { headers, method, url, body }; + + response.write(JSON.stringify(responseBody)); + response.end(); + // Note: the 2 lines above could be replaced with this next one: + // response.end(JSON.stringify(responseBody)) + + // END OF NEW STUFF + }); +}).listen(8080); +``` + +## Echo Server Example + +Let's simplify the previous example to make a simple echo server, which just +sends whatever data is received in the request right back in the response. All +we need to do is grab the data from the request stream and write that data to +the response stream, similar to what we did previously. 
+ +```javascript +const http = require('http'); + +http.createServer((request, response) => { + let body = []; + request.on('data', (chunk) => { + body.push(chunk); + }).on('end', () => { + body = Buffer.concat(body).toString(); + response.end(body); + }); +}).listen(8080); +``` + +Now let's tweak this. We want to only send an echo under the following +conditions: + +* The request method is POST. +* The URL is `/echo`. + +In any other case, we want to simply respond with a 404. + +```javascript +const http = require('http'); + +http.createServer((request, response) => { + if (request.method === 'POST' && request.url === '/echo') { + let body = []; + request.on('data', (chunk) => { + body.push(chunk); + }).on('end', () => { + body = Buffer.concat(body).toString(); + response.end(body); + }); + } else { + response.statusCode = 404; + response.end(); + } +}).listen(8080); +``` + +> **Note:** By checking the URL in this way, we're doing a form of "routing". +Other forms of routing can be as simple as `switch` statements or as complex as +whole frameworks like [`express`][]. If you're looking for something that does +routing and nothing else, try [`router`][]. + +Great! Now let's take a stab at simplifying this. Remember, the `request` object +is a [`ReadableStream`][] and the `response` object is a [`WritableStream`][]. +That means we can use [`pipe`][] to direct data from one to the other. That's +exactly what we want for an echo server! + +```javascript +const http = require('http'); + +http.createServer((request, response) => { + if (request.method === 'POST' && request.url === '/echo') { + request.pipe(response); + } else { + response.statusCode = 404; + response.end(); + } +}).listen(8080); +``` + +Yay streams! + +We're not quite done yet though. As mentioned multiple times in this guide, +errors can and do happen, and we need to deal with them. 
+ +To handle errors on the request stream, we'll log the error to `stderr` and send +a 400 status code to indicate a `Bad Request`. In a real-world application, +though, we'd want to inspect the error to figure out what the correct status code +and message would be. As usual with errors, you should consult the +[`Error` documentation][]. + +On the response, we'll just log the error to `stderr`. + +```javascript +const http = require('http'); + +http.createServer((request, response) => { + request.on('error', (err) => { + console.error(err); + response.statusCode = 400; + response.end(); + }); + response.on('error', (err) => { + console.error(err); + }); + if (request.method === 'POST' && request.url === '/echo') { + request.pipe(response); + } else { + response.statusCode = 404; + response.end(); + } +}).listen(8080); +``` + +We've now covered most of the basics of handling HTTP requests. At this point, +you should be able to: + +* Instantiate an HTTP server with a request handler function, and have it listen +on a port. +* Get headers, URL, method and body data from `request` objects. +* Make routing decisions based on URL and/or other data in `request` objects. +* Send headers, HTTP status codes and body data via `response` objects. +* Pipe data from `request` objects and to `response` objects. +* Handle stream errors in both the `request` and `response` streams. + +From these basics, Node.js HTTP servers for many typical use cases can be +constructed. There are plenty of other things these APIs provide, so be sure to +read through the API docs for [`EventEmitters`][], [`Streams`][], and [`HTTP`][]. 
+ + + +[`EventEmitters`]: https://nodejs.org/api/events.html +[`Streams`]: https://nodejs.org/api/stream.html +[`createServer`]: https://nodejs.org/api/http.html#http_http_createserver_requestlistener +[`Server`]: https://nodejs.org/api/http.html#http_class_http_server +[`listen`]: https://nodejs.org/api/http.html#http_server_listen_port_hostname_backlog_callback +[API reference]: https://nodejs.org/api/http.html +[`IncomingMessage`]: https://nodejs.org/api/http.html#http_class_http_incomingmessage +[`ReadableStream`]: https://nodejs.org/api/stream.html#stream_class_stream_readable +[`rawHeaders`]: https://nodejs.org/api/http.html#http_message_rawheaders +[`Buffer`]: https://nodejs.org/api/buffer.html +[`concat-stream`]: https://www.npmjs.com/package/concat-stream +[`body`]: https://www.npmjs.com/package/body +[`npm`]: https://www.npmjs.com +[`EventEmitter`]: https://nodejs.org/api/events.html#events_class_eventemitter +[handling these errors]: https://nodejs.org/api/errors.html +[`domains`]: https://nodejs.org/api/domain.html +[`ServerResponse`]: https://nodejs.org/api/http.html#http_class_http_serverresponse +[`setHeader`]: https://nodejs.org/api/http.html#http_response_setheader_name_value +[`WritableStream`]: https://nodejs.org/api/stream.html#stream_class_stream_writable +[`writeHead`]: https://nodejs.org/api/http.html#http_response_writehead_statuscode_statusmessage_headers +[`express`]: https://www.npmjs.com/package/express +[`router`]: https://www.npmjs.com/package/router +[`pipe`]: https://nodejs.org/api/stream.html#stream_readable_pipe_destination_options +[`Error` documentation]: https://nodejs.org/api/errors.html +[`HTTP`]: https://nodejs.org/api/http.html diff --git a/locale/pt-br/docs/guides/backpressuring-in-streams.md b/locale/pt-br/docs/guides/backpressuring-in-streams.md new file mode 100644 index 0000000000000..e77c834c8e47b --- /dev/null +++ b/locale/pt-br/docs/guides/backpressuring-in-streams.md @@ -0,0 +1,637 @@ +--- +title: Backpressuring in 
Streams
layout: docs.hbs
---

# Backpressuring in Streams

There is a general problem that occurs during data handling called
[`backpressure`][], which describes a buildup of data behind a buffer during
data transfer. When the receiving end of the transfer has complex operations,
or is slower for whatever reason, there is a tendency for data from the
incoming source to accumulate, like a clog.

To solve this problem, there must be a delegation system in place to ensure a
smooth flow of data from one source to another. Different communities have
resolved this issue uniquely for their programs; Unix pipes and TCP sockets are
good examples, and the mechanism is often referred to as _flow control_. In
Node.js, streams have been the adopted solution.

The purpose of this guide is to further detail what backpressure is, and how
exactly streams address it in Node.js' source code. The second part of
the guide will introduce suggested best practices to ensure your application's
code is safe and optimized when implementing streams.

We assume a little familiarity with the general definition of
[`backpressure`][], [`Buffer`][], and [`EventEmitters`][] in Node.js, as well as
some experience with [`Stream`][]. If you haven't read through those docs,
it's not a bad idea to take a look at the API documentation first, as it will
help expand your understanding while reading this guide.

## The Problem with Data Handling

In a computer system, data is transferred from one process to another through
pipes, sockets, and signals. In Node.js, we find a similar mechanism called
[`Stream`][]. Streams are great! They do so much for Node.js and almost every
part of the internal codebase utilizes that module. As a developer, you
are more than encouraged to use them too!

```javascript
const readline = require('readline');

// process.stdin and process.stdout are both instances of Streams
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('Why should you use streams? ', (answer) => {
  console.log(`Maybe it's ${answer}, maybe it's because they are awesome! :)`);

  rl.close();
});
```

A good example of why the backpressure mechanism implemented through streams is
a great optimization can be demonstrated by comparing the internal system tools
from Node.js' [`Stream`][] implementation.

In one scenario, we will take a large file (approximately 9 GB) and compress it
using the familiar [`zip(1)`][] tool.

```
$ zip The.Matrix.1080p.mkv
```

While that will take a few minutes to complete, in another shell we may run
a script that takes Node.js' module [`zlib`][], which wraps around another
compression tool, [`gzip(1)`][].

```javascript
const gzip = require('zlib').createGzip();
const fs = require('fs');

const inp = fs.createReadStream('The.Matrix.1080p.mkv');
const out = fs.createWriteStream('The.Matrix.1080p.mkv.gz');

inp.pipe(gzip).pipe(out);
```

To test the results, try opening each compressed file. The file compressed by
the [`zip(1)`][] tool will notify you the file is corrupt, whereas the
compression finished by [`Stream`][] will decompress without error.

Note: In this example, we use `.pipe()` to get the data source from one end
to the other. However, notice there are no proper error handlers attached. If
a chunk of data were to fail to be properly received, the `Readable` source or
`gzip` stream will not be destroyed. [`pump`][] is a utility tool that
properly destroys all the streams in a pipeline if one of them fails or closes,
and is a must-have in this case!

[`pump`][] is only necessary for Node.js 8.x or earlier; for Node.js 10.x and
later, [`pipeline`][] was introduced to replace [`pump`][].
This is a module method to pipe between streams, forwarding errors, properly
cleaning up, and providing a callback when the pipeline is complete.

Here is an example of using pipeline:

```javascript
const { pipeline } = require('stream');
const fs = require('fs');
const zlib = require('zlib');

// Use the pipeline API to easily pipe a series of streams
// together and get notified when the pipeline is fully done.
// A pipeline to gzip a potentially huge video file efficiently:

pipeline(
  fs.createReadStream('The.Matrix.1080p.mkv'),
  zlib.createGzip(),
  fs.createWriteStream('The.Matrix.1080p.mkv.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed', err);
    } else {
      console.log('Pipeline succeeded');
    }
  }
);
```
You can also call [`promisify`][] on pipeline to use it with `async` / `await`:

```javascript
const stream = require('stream');
const fs = require('fs');
const zlib = require('zlib');
const util = require('util');

const pipeline = util.promisify(stream.pipeline);

async function run() {
  try {
    await pipeline(
      fs.createReadStream('The.Matrix.1080p.mkv'),
      zlib.createGzip(),
      fs.createWriteStream('The.Matrix.1080p.mkv.gz'),
    );
    console.log('Pipeline succeeded');
  } catch (err) {
    console.error('Pipeline failed', err);
  }
}

run();
```

## Too Much Data, Too Quickly

There are instances where a [`Readable`][] stream might give data to the
[`Writable`][] much too quickly, much more than the consumer can handle!

When that occurs, the consumer will begin to queue all the chunks of data for
later consumption. The write queue will get longer and longer, and because of
this more data must be kept in memory until the entire process has completed.

Writing to a disk is a lot slower than reading from a disk, thus, when we are
trying to compress a file and write it to our hard disk, backpressure will
occur because the disk being written to will not be able to keep up with the
speed of the read.
+ +```javascript +// Secretly the stream is saying: "whoa, whoa! hang on, this is way too much!" +// Data will begin to build up on the read-side of the data buffer as +// `write` tries to keep up with the incoming data flow. +inp.pipe(gzip).pipe(outputFile); +``` +This is why a backpressure mechanism is important. If a backpressure system was +not present, the process would use up your system's memory, effectively slowing +down other processes, and monopolizing a large part of your system until +completion. + +This results in a few things: + +* Slowing down all other current processes +* A very overworked garbage collector +* Memory exhaustion + +In the following examples we will take out the [return value][] of the +`.write()` function and change it to `true`, which effectively disables +backpressure support in Node.js core. In any reference to 'modified' binary, +we are talking about running the `node` binary without the `return ret;` line, +and instead with the replaced `return true;`. + +## Excess Drag on Garbage Collection + +Let's take a look at a quick benchmark. Using the same example from above, we +ran a few time trials to get a median time for both binaries. + + +```javascript + trial (#) | `node` binary (ms) | modified `node` binary (ms) +================================================================= + 1 | 56924 | 55011 + 2 | 52686 | 55869 + 3 | 59479 | 54043 + 4 | 54473 | 55229 + 5 | 52933 | 59723 +================================================================= +average time: | 55299 | 55975 +``` + +Both take around a minute to run, so there's not much of a difference at all, +but let's take a closer look to confirm whether our suspicions are correct. We +use the Linux tool [`dtrace`][] to evaluate what's happening with the V8 garbage +collector. + +The GC (garbage collector) measured time indicates the intervals of a full cycle +of a single sweep done by the garbage collector: + + +```javascript +approx. 
time (ms) | GC (ms) | modified GC (ms)
=================================================
        0 |       0 |                0
        1 |       0 |                0
       40 |       0 |                2
      170 |       3 |                1
      300 |       3 |                1

          *         *                *
          *         *                *
          *         *                *

    39000 |       6 |               26
    42000 |       6 |               21
    47000 |       5 |               32
    50000 |       8 |               28
    54000 |       6 |               35
```
While the two processes start off the same and seem to exercise the GC at the
same rate, it becomes evident that after a few seconds, a properly working
backpressure system spreads the GC load across consistent intervals of 4-8
milliseconds until the end of the data transfer.

However, when a backpressure system is not in place, the V8 garbage collection
starts to drag out. The normal binary called the GC approximately __75__
times in a minute, whereas the modified binary called it only __36__ times.

This is the slow and gradual debt that accumulates from growing memory usage. As
data gets transferred, without a backpressure system in place, more memory is
used for each chunk transfer.

The more memory that is being allocated, the more the GC has to take care of in
one sweep. The bigger the sweep, the more the GC needs to decide what can be
freed up, and scanning for detached pointers in a larger memory space will
consume more computing power.

## Memory Exhaustion

To determine the memory consumption of each binary, we've clocked each process
with `/usr/bin/time -lp sudo ./node ./backpressure-example/zlib.js`
individually. 
This is the output on the normal binary:


```
Respecting the return value of .write()
=============================================
real 58.88
user 56.79
sys 8.79
 87810048 maximum resident set size
 0 average shared memory size
 0 average unshared data size
 0 average unshared stack size
 19427 page reclaims
 3134 page faults
 0 swaps
 5 block input operations
 194 block output operations
 0 messages sent
 0 messages received
 1 signals received
 12 voluntary context switches
 666037 involuntary context switches
```

The maximum byte size occupied by virtual memory turns out to be approximately
87.81 MB.

And now changing the [return value][] of the [`.write()`][] function, we get:


```
Without respecting the return value of .write():
==================================================
real 54.48
user 53.15
sys 7.43
1524965376 maximum resident set size
 0 average shared memory size
 0 average unshared data size
 0 average unshared stack size
 373617 page reclaims
 3139 page faults
 0 swaps
 18 block input operations
 199 block output operations
 0 messages sent
 0 messages received
 1 signals received
 25 voluntary context switches
 629566 involuntary context switches
```

The maximum byte size occupied by virtual memory turns out to be approximately
1.52 GB.

Without streams in place to delegate the backpressure, an order of magnitude
more memory space is allocated: a huge margin of difference between runs of the
same process!

This experiment shows how optimized and cost-effective Node.js' backpressure
mechanism is for your computing system. Now, let's break down how it works!

## How Does Backpressure Resolve These Issues?

There are different functions to transfer data from one process to another. In
Node.js, there is an internal built-in function called [`.pipe()`][]. There are
[other packages][] out there you can use too! 
Ultimately though, at the basic +level of this process, we have two separate components: the _source_ of the +data and the _consumer_. + +When [`.pipe()`][] is called from the source, it signals to the consumer that +there is data to be transferred. The pipe function helps to set up the +appropriate backpressure closures for the event triggers. + +In Node.js the source is a [`Readable`][] stream and the consumer is the +[`Writable`][] stream (both of these may be interchanged with a [`Duplex`][] or +a [`Transform`][] stream, but that is out-of-scope for this guide). + +The moment that backpressure is triggered can be narrowed exactly to the return +value of a [`Writable`][]'s [`.write()`][] function. This return value is +determined by a few conditions, of course. + +In any scenario where the data buffer has exceeded the [`highWaterMark`][] or +the write queue is currently busy, [`.write()`][] will return `false`. + +When a `false` value is returned, the backpressure system kicks in. It will +pause the incoming [`Readable`][] stream from sending any data and wait until +the consumer is ready again. Once the data buffer is emptied, a [`.drain()`][] +event will be emitted and resume the incoming data flow. + +Once the queue is finished, backpressure will allow data to be sent again. +The space in memory that was being used will free itself up and prepare for the +next batch of data. + +This effectively allows a fixed amount of memory to be used at any given +time for a [`.pipe()`][] function. There will be no memory leakage, no +infinite buffering, and the garbage collector will only have to deal with +one area in memory! + +So, if backpressure is so important, why have you (probably) not heard of it? +Well the answer is simple: Node.js does all of this automatically for you. + +That's so great! But also not so great when we are trying to understand how to +implement our own custom streams. 
Note: In most machines, there is a byte size that determines when a buffer
is full (which will vary across different machines). Node.js allows you to set
your own custom [`highWaterMark`][], but commonly, the default is set to 16KB
(16384 bytes), or 16 objects for streams in `objectMode`. In instances where you
might want to raise that value, go for it, but do so with caution!

## Lifecycle of `.pipe()`

To achieve a better understanding of backpressure, here is a flow-chart on the
lifecycle of a [`Readable`][] stream being [piped][] into a [`Writable`][]
stream:


```
 +===================+
 x--> Piping functions +--> src.pipe(dest) |
 x are set up during |===================|
 x the .pipe method. | Event callbacks |
 +===============+ x |-------------------|
 | Your Data | x They exist outside | .on('close', cb) |
 +=======+=======+ x the data flow, but | .on('data', cb) |
 | x importantly attach | .on('drain', cb) |
 | x events, and their | .on('unpipe', cb) |
+---------v---------+ x respective callbacks. | .on('error', cb) |
| Readable Stream +----+ | .on('finish', cb) |
+-^-------^-------^-+ | | .on('end', cb) |
 ^ | ^ | +-------------------+
 | | | |
 | ^ | |
 ^ ^ ^ | +-------------------+ +=================+
 ^ | ^ +----> Writable Stream +---------> .write(chunk) |
 | | | +-------------------+ +=======+=========+
 | | | |
 | ^ | +------------------v---------+
 ^ | +-> if (!chunk) | Is this chunk too big? |
 ^ | | emit .end(); | Is the queue busy? 
| 
 | | +-> else +-------+----------------+---+
 | ^ | emit .write(); | |
 | ^ ^ +--v---+ +---v---+
 | | ^-----------------------------------< No | | Yes |
 ^ | +------+ +---v---+
 ^ | |
 | ^ emit .pause(); +=================+ |
 | ^---------------^-----------------------+ return false; <-----+---+
 | +=================+ |
 | |
 ^ when queue is empty +============+ |
 ^------------^-----------------------< Buffering | |
 | |============| |
 +> emit .drain(); | ^Buffer^ | |
 +> emit .resume(); +------------+ |
 | ^Buffer^ | |
 +------------+ add chunk to queue |
 | <---^---------------------< +
 +============+
```

Note: If you are setting up a pipeline to chain together a few streams to
manipulate your data, you will most likely be implementing a [`Transform`][]
stream.

In this case, the output from your [`Readable`][] stream will enter the
[`Transform`][] stream and will be piped into the [`Writable`][].

```javascript
Readable.pipe(Transformable).pipe(Writable);
```

Backpressure will be automatically applied, but note that both the incoming and
outgoing `highWaterMark` of the [`Transform`][] stream may be manipulated and
will affect the backpressure system.

## Backpressure Guidelines

Since [Node.js v0.10][], the [`Stream`][] class has offered the ability to
modify the behaviour of the [`.read()`][] or [`.write()`][] by using the
underscore version of these respective functions ([`._read()`][] and
[`._write()`][]).

There are guidelines documented for [implementing Readable streams][] and
[implementing Writable streams][]. We will assume you've read these over, and
the next section will go a little bit more in-depth.

## Rules to Abide By When Implementing Custom Streams

The golden rule of streams is __to always respect backpressure__. What
constitutes best practice is non-contradictory practice. 
So long as you are
careful to avoid behaviours that conflict with internal backpressure support,
you can be sure you're following good practice.

In general,

1. Never `.push()` if you are not asked.
2. Never call `.write()` after it returns `false`; wait for `'drain'` instead.
3. Streams change between different Node.js versions and between the libraries
you use. Be careful and test things.

Note: Regarding point 3, an incredibly useful package for building
browser streams is [`readable-stream`][]. Rod Vagg has written a
[great blog post][] describing the utility of this library. In short, it
provides a type of automated graceful degradation for [`Readable`][] streams,
and supports older versions of browsers and Node.js.

## Rules specific to Readable Streams

So far, we have taken a look at how [`.write()`][] affects backpressure and have
focused much on the [`Writable`][] stream. Because of Node.js' functionality,
data is technically flowing downstream from [`Readable`][] to [`Writable`][].
However, as we can observe in any transmission of data, matter, or energy, the
source is just as important as the destination and the [`Readable`][] stream
is vital to how backpressure is handled.

Both of these processes rely on one another to communicate effectively; if
the [`Readable`][] ignores the [`Writable`][] stream's request to stop
sending data, that can be just as problematic as ignoring the return value of
[`.write()`][].

So, as well as respecting the return value of [`.write()`][], we must also
respect the return value of [`.push()`][] used in the [`._read()`][] method. If
[`.push()`][] returns a `false` value, the stream will stop reading from the
source. Otherwise, it will continue without pause.

Here is an example of bad practice using [`.push()`][]:
```javascript
// This is problematic as it completely ignores the return value from push,
// which may be a signal for backpressure from the destination stream! 
const { Readable } = require('stream');

class MyReadable extends Readable {
  _read(size) {
    let chunk;
    while (null !== (chunk = getNextChunk())) {
      this.push(chunk);
    }
  }
}
```

Additionally, from outside the custom stream, there are pitfalls to ignoring
backpressure. In this counter-example of good practice, the application's code
forces data through whenever it is available (signaled by the
[`.data` event][]):
```javascript
// This ignores the backpressure mechanisms Node.js has set in place,
// and unconditionally pushes through data, regardless of whether the
// destination stream is ready for it or not.
readable.on('data', (data) =>
  writable.write(data)
);
```

## Rules specific to Writable Streams

Recall that a [`.write()`][] may return true or false depending on some
conditions. Luckily for us, when building our own [`Writable`][] stream,
the [`stream state machine`][] will handle our callbacks and determine when to
handle backpressure and optimize the flow of data for us.

However, when we want to use a [`Writable`][] directly, we must respect the
[`.write()`][] return value and pay close attention to these conditions:

* If the write queue is busy, [`.write()`][] will return false.
* If the data chunk is too large, [`.write()`][] will return false (the limit
is indicated by the variable [`highWaterMark`][]).


```javascript
// This writable is invalid because of the async nature of JavaScript callbacks.
// Without a return statement for each callback prior to the last,
// there is a great chance multiple callbacks will be called. 
const { Writable } = require('stream');

class MyWritable extends Writable {
  _write(chunk, encoding, callback) {
    if (chunk.toString().indexOf('a') >= 0)
      callback();
    else if (chunk.toString().indexOf('b') >= 0)
      callback();
    callback();
  }
}

// The proper way to write this would be:
    if (chunk.toString().indexOf('a') >= 0)
      return callback();
    else if (chunk.toString().indexOf('b') >= 0)
      return callback();
    callback();
```

There are also some things to look out for when implementing [`._writev()`][].
The function is coupled with [`.cork()`][], but there is a common mistake when
writing:
```javascript
// Using .uncork() twice here makes two calls on the C++ layer, rendering the
// cork/uncork technique useless.
ws.cork();
ws.write('hello ');
ws.write('world ');
ws.uncork();

ws.cork();
ws.write('from ');
ws.write('Matteo');
ws.uncork();

// The correct way to write this is to utilize process.nextTick(), which fires
// on the next event loop tick.
ws.cork();
ws.write('hello ');
ws.write('world ');
process.nextTick(doUncork, ws);

ws.cork();
ws.write('from ');
ws.write('Matteo');
process.nextTick(doUncork, ws);

// as a global function
function doUncork(stream) {
  stream.uncork();
}
```

[`.cork()`][] can be called as many times as we want; we just need to be careful
to call [`.uncork()`][] the same number of times to make it flow again.

## Conclusion

Streams are a frequently used module in Node.js. They are important to Node.js's
internal structure and, for developers, a way to expand and connect across the
Node.js module ecosystem.

Hopefully, you will now be able to troubleshoot and safely code your own
[`Writable`][] and [`Readable`][] streams with backpressure in mind, and share
your knowledge with colleagues and friends.

Be sure to read up more on [`Stream`][] for other API functions to help
improve and unleash your streaming capabilities when building an application with
Node.js. 
+ + +[`Stream`]: https://nodejs.org/api/stream.html +[`Buffer`]: https://nodejs.org/api/buffer.html +[`EventEmitters`]: https://nodejs.org/api/events.html +[`Writable`]: https://nodejs.org/api/stream.html#stream_writable_streams +[`Readable`]: https://nodejs.org/api/stream.html#stream_readable_streams +[`Duplex`]: https://nodejs.org/api/stream.html#stream_duplex_and_transform_streams +[`Transform`]: https://nodejs.org/api/stream.html#stream_duplex_and_transform_streams +[`zlib`]: https://nodejs.org/api/zlib.html +[`.drain()`]: https://nodejs.org/api/stream.html#stream_event_drain +[`.data` event]: https://nodejs.org/api/stream.html#stream_event_data +[`.read()`]: https://nodejs.org/docs/latest/api/stream.html#stream_readable_read_size +[`.write()`]: https://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback +[`._read()`]: https://nodejs.org/docs/latest/api/stream.html#stream_readable_read_size_1 +[`._write()`]: https://nodejs.org/docs/latest/api/stream.html#stream_writable_write_chunk_encoding_callback_1 +[`._writev()`]: https://nodejs.org/api/stream.html#stream_writable_writev_chunks_callback +[`.cork()`]: https://nodejs.org/api/stream.html#stream_writable_cork +[`.uncork()`]: https://nodejs.org/api/stream.html#stream_writable_uncork + +[`.push()`]: https://nodejs.org/docs/latest/api/stream.html#stream_readable_push_chunk_encoding + +[implementing Writable streams]: https://nodejs.org/docs/latest/api/stream.html#stream_implementing_a_writable_stream +[implementing Readable streams]: https://nodejs.org/docs/latest/api/stream.html#stream_implementing_a_readable_stream + +[other packages]: https://github.com/sindresorhus/awesome-nodejs#streams +[`backpressure`]: https://en.wikipedia.org/wiki/Back_pressure#Backpressure_in_information_technology +[Node.js v0.10]: https://nodejs.org/docs/v0.10.0/ +[`highWaterMark`]: https://nodejs.org/api/stream.html#stream_buffering +[return value]: 
https://github.com/nodejs/node/blob/55c42bc6e5602e5a47fb774009cfe9289cb88e71/lib/_stream_writable.js#L239 + +[`readable-stream`]: https://github.com/nodejs/readable-stream +[great blog post]:https://r.va.gg/2014/06/why-i-dont-use-nodes-core-stream-module.html + +[`dtrace`]: http://dtrace.org/blogs/about/ +[`zip(1)`]: https://linux.die.net/man/1/zip +[`gzip(1)`]: https://linux.die.net/man/1/gzip +[`stream state machine`]: https://en.wikipedia.org/wiki/Finite-state_machine + +[`.pipe()`]: https://nodejs.org/docs/latest/api/stream.html#stream_readable_pipe_destination_options +[piped]: https://nodejs.org/docs/latest/api/stream.html#stream_readable_pipe_destination_options +[`pump`]: https://github.com/mafintosh/pump +[`pipeline`]: https://nodejs.org/api/stream.html#stream_stream_pipeline_streams_callback +[`promisify`]: https://nodejs.org/api/util.html#util_util_promisify_original diff --git a/locale/pt-br/docs/guides/blocking-vs-non-blocking.md b/locale/pt-br/docs/guides/blocking-vs-non-blocking.md new file mode 100644 index 0000000000000..cb38d766faa99 --- /dev/null +++ b/locale/pt-br/docs/guides/blocking-vs-non-blocking.md @@ -0,0 +1,148 @@ +--- +title: Overview of Blocking vs Non-Blocking +layout: docs.hbs +--- + +# Overview of Blocking vs Non-Blocking + +This overview covers the difference between **blocking** and **non-blocking** +calls in Node.js. This overview will refer to the event loop and libuv but no +prior knowledge of those topics is required. Readers are assumed to have a +basic understanding of the JavaScript language and Node.js [callback pattern](https://nodejs.org/en/knowledge/getting-started/control-flow/what-are-callbacks/). + +> "I/O" refers primarily to interaction with the system's disk and +> network supported by [libuv](http://libuv.org/). + + +## Blocking + +**Blocking** is when the execution of additional JavaScript in the Node.js +process must wait until a non-JavaScript operation completes. 
This happens +because the event loop is unable to continue running JavaScript while a +**blocking** operation is occurring. + +In Node.js, JavaScript that exhibits poor performance due to being CPU intensive +rather than waiting on a non-JavaScript operation, such as I/O, isn't typically +referred to as **blocking**. Synchronous methods in the Node.js standard library +that use libuv are the most commonly used **blocking** operations. Native +modules may also have **blocking** methods. + +All of the I/O methods in the Node.js standard library provide asynchronous +versions, which are **non-blocking**, and accept callback functions. Some +methods also have **blocking** counterparts, which have names that end with +`Sync`. + + +## Comparing Code + +**Blocking** methods execute **synchronously** and **non-blocking** methods +execute **asynchronously**. + +Using the File System module as an example, this is a **synchronous** file read: + +```js +const fs = require('fs'); +const data = fs.readFileSync('/file.md'); // blocks here until file is read +``` + +And here is an equivalent **asynchronous** example: + +```js +const fs = require('fs'); +fs.readFile('/file.md', (err, data) => { + if (err) throw err; +}); +``` + +The first example appears simpler than the second but has the disadvantage of +the second line **blocking** the execution of any additional JavaScript until +the entire file is read. Note that in the synchronous version if an error is +thrown it will need to be caught or the process will crash. In the asynchronous +version, it is up to the author to decide whether an error should throw as +shown. 
+ +Let's expand our example a little bit: + +```js +const fs = require('fs'); +const data = fs.readFileSync('/file.md'); // blocks here until file is read +console.log(data); +moreWork(); // will run after console.log +``` + +And here is a similar, but not equivalent asynchronous example: + +```js +const fs = require('fs'); +fs.readFile('/file.md', (err, data) => { + if (err) throw err; + console.log(data); +}); +moreWork(); // will run before console.log +``` + +In the first example above, `console.log` will be called before `moreWork()`. In +the second example `fs.readFile()` is **non-blocking** so JavaScript execution +can continue and `moreWork()` will be called first. The ability to run +`moreWork()` without waiting for the file read to complete is a key design +choice that allows for higher throughput. + + +## Concurrency and Throughput + +JavaScript execution in Node.js is single threaded, so concurrency refers to the +event loop's capacity to execute JavaScript callback functions after completing +other work. Any code that is expected to run in a concurrent manner must allow +the event loop to continue running as non-JavaScript operations, like I/O, are +occurring. + +As an example, let's consider a case where each request to a web server takes +50ms to complete and 45ms of that 50ms is database I/O that can be done +asynchronously. Choosing **non-blocking** asynchronous operations frees up that +45ms per request to handle other requests. This is a significant difference in +capacity just by choosing to use **non-blocking** methods instead of +**blocking** methods. + +The event loop is different than models in many other languages where additional +threads may be created to handle concurrent work. + + +## Dangers of Mixing Blocking and Non-Blocking Code + +There are some patterns that should be avoided when dealing with I/O. 
Let's look +at an example: + +```js +const fs = require('fs'); +fs.readFile('/file.md', (err, data) => { + if (err) throw err; + console.log(data); +}); +fs.unlinkSync('/file.md'); +``` + +In the above example, `fs.unlinkSync()` is likely to be run before +`fs.readFile()`, which would delete `file.md` before it is actually read. A +better way to write this, which is completely **non-blocking** and guaranteed to +execute in the correct order is: + + +```js +const fs = require('fs'); +fs.readFile('/file.md', (readFileErr, data) => { + if (readFileErr) throw readFileErr; + console.log(data); + fs.unlink('/file.md', (unlinkErr) => { + if (unlinkErr) throw unlinkErr; + }); +}); +``` + +The above places a **non-blocking** call to `fs.unlink()` within the callback of +`fs.readFile()` which guarantees the correct order of operations. + + +## Additional Resources + +- [libuv](http://libuv.org/) +- [About Node.js](https://nodejs.org/en/about/) diff --git a/locale/pt-br/docs/guides/buffer-constructor-deprecation.md b/locale/pt-br/docs/guides/buffer-constructor-deprecation.md new file mode 100644 index 0000000000000..5d07bb4ea7595 --- /dev/null +++ b/locale/pt-br/docs/guides/buffer-constructor-deprecation.md @@ -0,0 +1,281 @@ +--- +title: Porting to the Buffer.from()/Buffer.alloc() API +layout: docs.hbs +--- + +# Porting to the `Buffer.from()`/`Buffer.alloc()` API + +## Overview + +This guide explains how to migrate to safe `Buffer` constructor methods. The migration fixes the following deprecation warning: + +
```
The Buffer() and new Buffer() constructors are not recommended for use due to security and usability concerns. Please use the new Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() construction methods instead.
```
- [Variant 1: Drop support for Node.js ≤ 4.4.x and 5.0.0 — 5.9.x](#variant-1) (*recommended*)
- [Variant 2: Use a polyfill](#variant-2)
- [Variant 3: Manual detection, with safeguards](#variant-3)

### Finding problematic bits of code using `grep`

Just run `grep -nrE '[^a-zA-Z](Slow)?Buffer\s*\(' --exclude-dir node_modules`.

It will find all the potentially unsafe places in your own code (with a few quite unlikely
exceptions).

### Finding problematic bits of code using Node.js 8

If you’re using Node.js ≥ 8.0.0 (which is recommended), Node.js exposes multiple options that help with finding the relevant pieces of code:

- `--trace-warnings` will make Node.js show a stack trace for this warning and other warnings that are printed by Node.js.
- `--trace-deprecation` does the same thing, but only for deprecation warnings.
- `--pending-deprecation` will show more types of deprecation warnings. In particular, it will show the `Buffer()` deprecation warning, even on Node.js 8.

You can set these flags using environment variables:

```bash
$ export NODE_OPTIONS='--trace-warnings --pending-deprecation'
$ cat example.js
'use strict';
const foo = new Buffer('foo');
$ node example.js
(node:7147) [DEP0005] DeprecationWarning: The Buffer() and new Buffer() constructors are not recommended for use due to security and usability concerns. Please use the new Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() construction methods instead.
    at showFlaggedDeprecation (buffer.js:127:13)
    at new Buffer (buffer.js:148:3)
    at Object.<anonymous> (/path/to/example.js:2:13)
    [... more stack trace lines ...]
```

### Finding problematic bits of code using linters

ESLint rules [no-buffer-constructor](https://eslint.org/docs/rules/no-buffer-constructor)
or
[node/no-deprecated-api](https://github.com/mysticatea/eslint-plugin-node/blob/master/docs/rules/no-deprecated-api.md)
also find calls to the deprecated `Buffer()` API. 
Those rules are included in some presets.

There is a drawback, though: these rules don't always
[work correctly](https://github.com/chalker/safer-buffer#why-not-safe-buffer) when `Buffer` is
overridden, e.g. with a polyfill, so a combination of linting and one of the other methods
described above is recommended.

## Variant 1: Drop support for Node.js ≤ 4.4.x and 5.0.0 — 5.9.x

This is the recommended solution nowadays, and it implies only minimal overhead.

The Node.js 5.x release line has been unsupported since July 2016, and the Node.js 4.x release line reaches its End of Life in April 2018 (→ [Schedule](https://github.com/nodejs/Release#release-schedule)). This means that these versions of Node.js will *not* receive any updates, even in case of security issues, so using these release lines should be avoided, if at all possible.

What you would do in this case is to convert all `new Buffer()` or `Buffer()` calls to use `Buffer.alloc()` or `Buffer.from()`, in the following way:

- For `new Buffer(number)`, replace it with `Buffer.alloc(number)`.
- For `new Buffer(string)` (or `new Buffer(string, encoding)`), replace it with `Buffer.from(string)` (or `Buffer.from(string, encoding)`).
- For all other combinations of arguments (these are much rarer), also replace `new Buffer(...arguments)` with `Buffer.from(...arguments)`.

Note that `Buffer.alloc()` is also _faster_ on the current Node.js versions than
`new Buffer(size).fill(0)`, which is what you would otherwise need to ensure zero-filling.

Enabling ESLint rule [no-buffer-constructor](https://eslint.org/docs/rules/no-buffer-constructor)
or
[node/no-deprecated-api](https://github.com/mysticatea/eslint-plugin-node/blob/master/docs/rules/no-deprecated-api.md)
is recommended to avoid accidental unsafe `Buffer` API usage. 
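As a quick sketch of the substitutions described above (the values here are illustrative):

```javascript
// Before (deprecated API):
//   const zeroed  = new Buffer(16);
//   const fromStr = new Buffer('hello', 'utf8');

// After (safe API):
const zeroed = Buffer.alloc(16);              // 16 zero-filled bytes
const fromStr = Buffer.from('hello', 'utf8'); // the bytes of the string

console.log(zeroed.length, fromStr.toString('utf8')); // → 16 hello
```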
+ +There is also a [JSCodeshift codemod](https://github.com/joyeecheung/node-dep-codemod#dep005) +for automatically migrating `Buffer` constructors to `Buffer.alloc()` or `Buffer.from()`. +Note that it currently only works with cases where the arguments are literals or where the +constructor is invoked with two arguments. + +_If you currently support those older Node.js versions and dropping support for them is not possible, or if you support older branches of your packages, consider using [Variant 2](#variant-2) +or [Variant 3](#variant-3) on older branches, so people using those older branches will also receive +the fix. That way, you will eradicate potential issues caused by unguarded `Buffer` API usage and +your users will not observe a runtime deprecation warning when running your code on Node.js 10._ + +## Variant 2: Use a polyfill + +There are three different polyfills available: + +- **[safer-buffer](https://www.npmjs.com/package/safer-buffer)** is a drop-in replacement for the + entire `Buffer` API, that will _throw_ when using `new Buffer()`. + + You would take exactly the same steps as in [Variant 1](#variant-1), but with a polyfill + `const Buffer = require('safer-buffer').Buffer` in all files where you use the new `Buffer` API. + + Do not use the old `new Buffer()` API. In any files where the line above is added, + using old `new Buffer()` API will _throw_. + +- **[buffer-from](https://www.npmjs.com/package/buffer-from) and/or + [buffer-alloc](https://www.npmjs.com/package/buffer-alloc)** are + [ponyfills](https://ponyfill.com/) for their respective part of the `Buffer` API. You only need + to add the package(s) corresponding to the API you are using. + + You would import the module needed with an appropriate name, e.g. + `const bufferFrom = require('buffer-from')` and then use that instead of the call to + `new Buffer()`, e.g. `new Buffer('test')` becomes `bufferFrom('test')`. 
  A downside of this approach is that it requires slightly more code changes to
  migrate off them (as you would be
  using e.g. `Buffer.from()` under a different name).

- **[safe-buffer](https://www.npmjs.com/package/safe-buffer)** is also a drop-in replacement for
  the entire `Buffer` API, but using `new Buffer()` will still work as before.

  A downside to this approach is that it will allow you to also use the older `new Buffer()` API
  in your code, which is problematic since it can cause issues in your code, and will start
  emitting runtime deprecation warnings starting with Node.js 10
  ([read more here](https://github.com/chalker/safer-buffer#why-not-safe-buffer)).

Note that in either case, it is important that you also remove all calls to the old `Buffer`
API manually — just throwing in `safe-buffer` doesn't fix the problem by itself; it just provides
a polyfill for the new API. I have seen people make that mistake.

Enabling ESLint rule [no-buffer-constructor](https://eslint.org/docs/rules/no-buffer-constructor)
or
[node/no-deprecated-api](https://github.com/mysticatea/eslint-plugin-node/blob/master/docs/rules/no-deprecated-api.md)
is recommended.

_Don't forget to drop the polyfill usage once you drop support for Node.js < 4.5.0._

## Variant 3: Manual detection, with safeguards

This is useful if you create `Buffer` instances in only a few places (e.g. one), or you have your own
wrapper around them.

### `Buffer(0)`

This special case for creating empty buffers can be safely replaced with `Buffer.concat([])`, which
returns the same result all the way down to Node.js 0.8.x. 
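A quick sanity check of that replacement (a minimal sketch):

```javascript
// Buffer.concat([]) produces an empty buffer without touching the
// deprecated constructor, and matches Buffer.alloc(0) on modern Node.js.
const empty = Buffer.concat([]);
console.log(empty.length, empty.equals(Buffer.alloc(0))); // → 0 true
```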
### `Buffer(notNumber)`

Before:

```js
const buf = new Buffer(notNumber, encoding);
```

After:

```js
let buf;
if (Buffer.from && Buffer.from !== Uint8Array.from) {
  buf = Buffer.from(notNumber, encoding);
} else {
  if (typeof notNumber === 'number') {
    throw new Error('The "size" argument must not be of type number.');
  }
  buf = new Buffer(notNumber, encoding);
}
```

`encoding` is optional.

Note that the `typeof notNumber` check before `new Buffer()` is required (for cases when the
`notNumber` argument is not hard-coded) and _is not caused by the deprecation of the `Buffer`
constructor_ — it's exactly _why_ the `Buffer` constructor is deprecated. Ecosystem packages
lacking this type check caused numerous security issues — situations where unsanitized user input
could end up in `Buffer(arg)` created problems ranging from DoS to leaking sensitive information
from the process memory to the attacker.

When the `notNumber` argument is hardcoded (e.g. a literal `"abc"` or `[0,1,2]`), the `typeof`
check can be omitted.

Also, note that using TypeScript does not fix this problem for you — when libraries written in
TypeScript are used from JS, or when user input ends up there, it behaves exactly as pure JS, as
all type checks are translation-time only and are not present in the actual JS code to which TS
compiles.

### `Buffer(number)`

For Node.js 0.10.x (and below) support:

```js
var buf;
if (Buffer.alloc) {
  buf = Buffer.alloc(number);
} else {
  buf = new Buffer(number);
  buf.fill(0);
}
```

Otherwise (Node.js ≥ 0.12.x):

```js
const buf = Buffer.alloc ? Buffer.alloc(number) : new Buffer(number).fill(0);
```

## Regarding `Buffer.allocUnsafe()`

Be extra cautious when using `Buffer.allocUnsafe()`:
 * Don't use it if you don't have a good reason to
 * e.g. 
you probably won't ever see a performance difference for small buffers; in fact, those
   might even be faster with `Buffer.alloc()`,
   * if your code is not in the hot code path, you also probably won't notice a difference,
   * keep in mind that zero-filling minimizes the potential risks.
 * If you use it, make sure that you never return the buffer in a partially-filled state,
   * if you are writing to it sequentially, always truncate it to the actual written length.

Errors in handling buffers allocated with `Buffer.allocUnsafe()` could result in various issues,
ranging from undefined behavior of your code to sensitive data (user input, passwords, certs)
leaking to a remote attacker.

_Note that the same applies to `new Buffer()` usage without zero-filling, depending on the Node.js
version (and lacking type checks also adds DoS to the list of potential problems)._

## FAQ

### What is wrong with the `Buffer` constructor?

The `Buffer` constructor could be used to create a buffer in many different ways:

- `new Buffer(42)` creates a `Buffer` of 42 bytes. Before Node.js 8, this buffer contained
  *arbitrary memory* for performance reasons, which could include anything ranging from
  program source code to passwords and encryption keys.
- `new Buffer('abc')` creates a `Buffer` that contains the UTF-8-encoded version of
  the string `'abc'`. A second argument could specify another encoding: for example,
  `new Buffer(string, 'base64')` could be used to convert a Base64 string into the original
  sequence of bytes that it represents.
- There are several other combinations of arguments.

This means that in code like `var buffer = new Buffer(foo);`, *it is not possible to tell
what exactly the contents of the generated buffer are* without knowing the type of `foo`.

Sometimes, the value of `foo` comes from an external source.
For example, this function
could be exposed as a service on a web server, converting a UTF-8 string into its Base64 form:

```js
function stringToBase64(req, res) {
  // The request body should have the format of `{ string: 'foobar' }`.
  const rawBytes = new Buffer(req.body.string);
  const encoded = rawBytes.toString('base64');
  res.end({ encoded });
}
```

Note that this code does *not* validate the type of `req.body.string`:

- `req.body.string` is expected to be a string. If this is the case, all goes well.
- `req.body.string` is controlled by the client that sends the request.
- If `req.body.string` is the *number* `50`, the `rawBytes` would be `50` bytes:
  - Before Node.js 8, the content would be uninitialized
  - After Node.js 8, the content would be `50` bytes with the value `0`

Because of the missing type check, an attacker could intentionally send a number
as part of the request. Using this, they can either:

- Read uninitialized memory. This **will** leak passwords, encryption keys and other
  kinds of sensitive information. (Information leak)
- Force the program to allocate a large amount of memory. For example, when specifying
  `500000000` as the input value, each request will allocate 500 MB of memory.
  This can be used to either exhaust the memory available to the program completely
  and make it crash, or slow it down significantly. (Denial of Service)

Both of these scenarios are considered serious security issues in a real-world
web server context.

When using `Buffer.from(req.body.string)` instead, passing a number will always
throw an exception, giving controlled behavior that can always be handled by
the program.

### The `Buffer()` constructor has been deprecated for a while. Is this really an issue?

Surveys of code in the `npm` ecosystem have shown that the `Buffer()` constructor is still
widely used. This includes new code, and overall usage of such code has actually been
*increasing*.
diff --git a/locale/pt-br/docs/guides/debugging-getting-started.md b/locale/pt-br/docs/guides/debugging-getting-started.md new file mode 100644 index 0000000000000..0f29680104c2f --- /dev/null +++ b/locale/pt-br/docs/guides/debugging-getting-started.md @@ -0,0 +1,244 @@ +--- +title: Debugging - Getting Started +layout: docs.hbs +--- + +# Debugging Guide + +This guide will help you get started debugging your Node.js apps and scripts. + +## Enable Inspector + +When started with the `--inspect` switch, a Node.js process listens for a +debugging client. By default, it will listen at host and port 127.0.0.1:9229. +Each process is also assigned a unique [UUID][]. + +Inspector clients must know and specify host address, port, and UUID to connect. +A full URL will look something like +`ws://127.0.0.1:9229/0f2c936f-b1cd-4ac9-aab3-f63b0f33d55e`. + +Node.js will also start listening for debugging messages if it receives a +`SIGUSR1` signal. (`SIGUSR1` is not available on Windows.) In Node.js 7 and +earlier, this activates the legacy Debugger API. In Node.js 8 and later, it will +activate the Inspector API. + +--- +## Security Implications + +Since the debugger has full access to the Node.js execution environment, a +malicious actor able to connect to this port may be able to execute arbitrary +code on behalf of the Node process. It is important to understand the security +implications of exposing the debugger port on public and private networks. + +### Exposing the debug port publicly is unsafe + +If the debugger is bound to a public IP address, or to 0.0.0.0, any clients that +can reach your IP address will be able to connect to the debugger without any +restriction and will be able to run arbitrary code. + +By default `node --inspect` binds to 127.0.0.1. You explicitly need to provide a +public IP address or 0.0.0.0, etc., if you intend to allow external connections +to the debugger. Doing so may expose you to a potentially significant security +threat. 
We suggest you ensure appropriate firewalls and access controls are in place
to prevent a security exposure.

See the section on '[Enabling remote debugging scenarios](#enabling-remote-debugging-scenarios)' for advice on how
to safely allow remote debugger clients to connect.

### Local applications have full access to the inspector

Even if you bind the inspector port to 127.0.0.1 (the default), any applications
running locally on your machine will have unrestricted access. This is by design,
to allow local debuggers to attach conveniently.

### Browsers, WebSockets and same-origin policy

Websites open in a web browser can make WebSocket and HTTP requests under the
browser security model. An initial HTTP connection is necessary to obtain a
unique debugger session id. The same-origin policy prevents websites from being
able to make this HTTP connection. For additional security against
[DNS rebinding attacks](https://en.wikipedia.org/wiki/DNS_rebinding), Node.js
verifies that the 'Host' headers for the connection either
specify an IP address or `localhost` or `localhost6` precisely.

These security policies disallow connecting to a remote debug server by
specifying the hostname. You can work around this restriction by specifying
either the IP address or by using ssh tunnels as described below.

## Inspector Clients

Several commercial and open source tools can connect to Node's Inspector. Basic
info on these follows:

#### [node-inspect](https://github.com/nodejs/node-inspect)

* CLI debugger supported by the Node.js Foundation which uses the [Inspector Protocol][].
* A version is bundled with Node and can be used with `node inspect myscript.js`.
* The latest version can also be installed independently (e.g. `npm install -g node-inspect`)
  and used with `node-inspect myscript.js`.
#### [Chrome DevTools](https://github.com/ChromeDevTools/devtools-frontend) 55+

* **Option 1**: Open `chrome://inspect` in a Chromium-based
  browser. Click the Configure button and ensure your target host and port
  are listed.
* **Option 2**: Copy the `devtoolsFrontendUrl` from the output of `/json/list`
  (see above) or the `--inspect` hint text and paste it into Chrome.
* **Option 3**: Install the Chrome extension NIM (Node Inspector Manager):
  https://chrome.google.com/webstore/detail/nim-node-inspector-manage/gnhhdgbaldcilmgcpfddgdbkhjohddkj

#### [Visual Studio Code](https://github.com/microsoft/vscode) 1.10+

* In the Debug panel, click the settings icon to open `.vscode/launch.json`.
  Select "Node.js" for initial setup.

#### [Visual Studio](https://github.com/Microsoft/nodejstools) 2017

* Choose "Debug > Start Debugging" from the menu or hit F5.
* [Detailed instructions](https://github.com/Microsoft/nodejstools/wiki/Debugging).

#### [JetBrains WebStorm](https://www.jetbrains.com/webstorm/) 2017.1+ and other JetBrains IDEs

* Create a new Node.js debug configuration and hit Debug. `--inspect` will be used
  by default for Node.js 7+. To disable, uncheck `js.debugger.node.use.inspect` in
  the IDE Registry.

#### [chrome-remote-interface](https://github.com/cyrus-and/chrome-remote-interface)

* Library to ease connections to Inspector Protocol endpoints.

#### [Gitpod](https://www.gitpod.io)

* Start a Node.js debug configuration from the `Debug` view or hit `F5`.
  [Detailed instructions](https://medium.com/gitpod/debugging-node-js-applications-in-theia-76c94c76f0a1)

---

## Command-line options

The following table lists the impact of various runtime flags on debugging:
| Flag | Meaning |
| --- | --- |
| `--inspect` | Enable inspector agent; listen on default address and port (127.0.0.1:9229) |
| `--inspect=[host:port]` | Enable inspector agent; bind to address or hostname `host` (default: 127.0.0.1); listen on port `port` (default: 9229) |
| `--inspect-brk` | Enable inspector agent; listen on default address and port (127.0.0.1:9229); break before user code starts |
| `--inspect-brk=[host:port]` | Enable inspector agent; bind to address or hostname `host` (default: 127.0.0.1); listen on port `port` (default: 9229); break before user code starts |
| `node inspect script.js` | Spawn child process to run the user's script under the `--inspect` flag, and use the main process to run the CLI debugger |
| `node inspect --port=xxxx script.js` | Spawn child process to run the user's script under the `--inspect` flag, and use the main process to run the CLI debugger; listen on port `port` (default: 9229) |
---

## Enabling remote debugging scenarios

We recommend that you never have the debugger listen on a public IP address. If
you need to allow remote debugging connections we recommend the use of ssh
tunnels instead. We provide the following example for illustrative purposes only.
Please understand the security risk of allowing remote access to a privileged
service before proceeding.

Let's say you are running Node on a remote machine, remote.example.com, that you
want to be able to debug. On that machine, you should start the node process
with the inspector listening only to localhost (the default).

```bash
$ node --inspect server.js
```

Now, on your local machine from where you want to initiate a debug client
connection, you can set up an ssh tunnel:

```bash
$ ssh -L 9221:localhost:9229 user@remote.example.com
```

This starts an ssh tunnel session where a connection to port 9221 on your local
machine will be forwarded to port 9229 on remote.example.com. You can now attach
a debugger such as Chrome DevTools or Visual Studio Code to localhost:9221,
which should be able to debug as if the Node.js application were running locally.

---

## Legacy Debugger

**The legacy debugger has been deprecated as of Node 7.7.0. Please use --inspect
and Inspector instead.**

When started with the **--debug** or **--debug-brk** switches in version 7 and
earlier, Node.js listens for debugging commands defined by the discontinued
V8 Debugging Protocol on a TCP port, by default `5858`. Any debugger client
which speaks this protocol can connect to and debug the running process; a
couple of popular ones are listed below.

The V8 Debugging Protocol is no longer maintained or documented.

#### [Built-in Debugger](https://nodejs.org/dist/latest-v6.x/docs/api/debugger.html)

Start `node debug script_name.js` to start your script under Node's builtin
command-line debugger.
Your script runs in another Node process started with
the `--debug-brk` option, and the initial Node process runs the `_debugger.js`
script and connects to your target.

#### [node-inspector](https://github.com/node-inspector/node-inspector)

Debug your Node.js app with Chrome DevTools by using an intermediary process
which translates the Inspector Protocol used in Chromium to the V8 Debugger
protocol used in Node.js.

[Inspector Protocol]: https://chromedevtools.github.io/debugger-protocol-viewer/v8/
[UUID]: https://tools.ietf.org/html/rfc4122
diff --git a/locale/pt-br/docs/guides/diagnostics-flamegraph.md b/locale/pt-br/docs/guides/diagnostics-flamegraph.md
new file mode 100644
index 0000000000000..159e7d028a070
--- /dev/null
+++ b/locale/pt-br/docs/guides/diagnostics-flamegraph.md
@@ -0,0 +1,120 @@
---
title: Diagnostics - Flame Graphs
layout: docs.hbs
---

# Flame Graphs

## What's a flame graph useful for?

Flame graphs are a way of visualizing CPU time spent in functions. They can help you pin down where you spend too much time doing synchronous operations.

## How to create a flame graph

You might have heard that creating a flame graph for Node.js is difficult, but that's not true (anymore).
Solaris VMs are no longer needed for flame graphs!

Flame graphs are generated from `perf` output, which is not a Node-specific tool. While it's the most powerful way to visualize CPU time spent, it may have issues with how JavaScript code is optimized in Node.js 8 and above. See the [perf output issues](#perf-output-issues) section below.
### Use a pre-packaged tool

If you want a single step that produces a flame graph locally, try [0x](https://www.npmjs.com/package/0x).

For diagnosing production deployments, read these notes: [0x production servers](https://github.com/davidmarkclements/0x/blob/master/docs/production-servers.md)

### Create a flame graph with system perf tools

The purpose of this guide is to show the steps involved in creating a flame graph and keep you in control of each step.

If you want to understand each step better, take a look at the sections that follow, where we go into more detail.

Now let's get to work.

1. Install `perf` (usually available through the linux-tools-common package if not already installed)
2. Try running `perf` - it might complain about missing kernel modules; install them too
3. Run node with perf enabled (see [perf output issues](#perf-output-issues) for tips specific to Node.js versions)
```bash
perf record -e cycles:u -g -- node --perf-basic-prof app.js
```
4. Disregard warnings unless they're saying you can't run perf due to missing packages; you may get some warnings about not being able to access kernel module samples, which you're not after anyway.
5. Run `perf script > perfs.out` to generate the data file you'll visualize in a moment. It's useful to [apply some cleanup](#filtering-out-node-internal-functions) for a more readable graph
6. Install stackvis if not yet installed: `npm i -g stackvis`
7. Run `stackvis perf < perfs.out > flamegraph.htm`

Now open the flame graph file in your favorite browser and watch it burn. It's color-coded so you can focus on the most saturated orange bars first. They're likely to represent CPU-heavy functions.

Worth mentioning: if you click an element of a flame graph, a zoom-in of its surroundings will be displayed above the graph.

### Using `perf` to sample a running process

This is great for recording flame graph data from an already running process that you don't want to interrupt.
Imagine a production process with a hard-to-reproduce issue.

```bash
perf record -F99 -p `pgrep -n node` -g -- sleep 3
```

Wait, what is that `sleep 3` for? It's there to keep perf running - despite the `-p` option pointing to a different pid, the command needs to be executed on a process and end with it.
perf runs for the life of the command you pass to it, whether or not you're actually profiling that command. `sleep 3` ensures that perf runs for 3 seconds.

Why is `-F` (profiling frequency) set to 99? It's a reasonable default. You can adjust it if you want.
`-F99` tells perf to take 99 samples per second; for more precision, increase the value. Lower values produce less output with less precise results. The precision you need depends on how long your CPU-intensive functions really run. If you're looking for the reason for a noticeable slowdown, 99 samples per second should be more than enough.

After you get that 3-second perf record, proceed with generating the flame graph with the last two steps from above.

### Filtering out Node.js internal functions

Usually you just want to look at the performance of your own calls, so filtering out Node.js and V8 internal functions can make the graph much easier to read. You can clean up your perf file with:

```bash
sed -i \
  -e "/( __libc_start| LazyCompile | v8::internal::| Builtin:| Stub:| LoadIC:|\[unknown\]| LoadPolymorphicIC:)/d" \
  -e 's/ LazyCompile:[*~]\?/ /' \
  perfs.out
```

If you read your flame graph and it seems odd, as if something is missing in the key function taking up the most time, try generating your flame graph without the filters - maybe you got a rare case of an issue with Node.js itself.

### Node.js's profiling options

`--perf-basic-prof-only-functions` and `--perf-basic-prof` are the two that are useful for debugging your JavaScript code. Other options are used for profiling Node.js itself, which is outside the scope of this guide.
`--perf-basic-prof-only-functions` produces less output, so it's the option with the least overhead.

### Why do I need them at all?

Well, without these options you'll still get a flame graph, but with most bars labeled `v8::Function::Call`.

## `perf` output issues

### Node.js 8.x V8 pipeline changes

Node.js 8.x and above ships with new optimizations to the JavaScript compilation pipeline in the V8 engine (called TurboFan) which sometimes makes function names/references unreachable for perf.

The result is that you might not get your function names right in the flame graph.

You'll notice `ByteCodeHandler:` where you'd expect function names.

[0x](https://www.npmjs.com/package/0x) has some mitigations for that built in.

For details see:
- https://github.com/nodejs/benchmarking/issues/168
- https://github.com/nodejs/diagnostics/issues/148#issuecomment-369348961

### Node.js 10+

Node.js 10.x addresses the issue with TurboFan using the `--interpreted-frames-native-stack` flag.

Run `node --interpreted-frames-native-stack --perf-basic-prof-only-functions` to get function names in the flame graph regardless of which pipeline V8 used to compile your JavaScript.

### Broken labels in the flame graph

If you're seeing labels looking like this
```
node`_ZN2v88internal11interpreter17BytecodeGenerator15VisitStatementsEPNS0_8ZoneListIPNS0_9StatementEEE
```
it means the Linux perf you're using was not compiled with demangle support; see https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1396654 for example.

## Examples

Practice capturing flame graphs yourself with [a flame graph exercise](https://github.com/naugtur/node-example-flamegraph)!
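If you want a self-contained target to practice on, a deliberately CPU-bound script (a hypothetical `busy.js`, not part of the exercise above) works well; its hot function should dominate the resulting flame graph:

```javascript
// busy.js - a deliberately CPU-heavy function, so it shows up as a
// wide bar when profiled with the perf commands described earlier.
function hotFunction() {
  let acc = 0;
  for (let i = 0; i < 1e7; i++) {
    acc += Math.sqrt(i);
  }
  return acc;
}

// Call it repeatedly so the process runs long enough to sample.
let total = 0;
for (let i = 0; i < 20; i++) {
  total += hotFunction();
}
console.log(total > 0); // true
```

Recording it with `perf record -e cycles:u -g -- node --perf-basic-prof busy.js` should then show `hotFunction` prominently in the graph.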
diff --git a/locale/pt-br/docs/guides/domain-postmortem.md b/locale/pt-br/docs/guides/domain-postmortem.md
new file mode 100644
index 0000000000000..6426c2a73361a
--- /dev/null
+++ b/locale/pt-br/docs/guides/domain-postmortem.md
@@ -0,0 +1,444 @@
---
title: Domain Module Postmortem
layout: docs.hbs
---

# Domain Module Postmortem

## Usability Issues

### Implicit Behavior

It's possible for a developer to create a new domain and then simply run
`domain.enter()`, which then acts as a catch-all for any future exception
that couldn't be observed by the thrower. This allows a module author to
intercept the exceptions of unrelated code in a different module, preventing
the originator of the code from knowing about its own exceptions.

Here's an example of how one indirectly linked module can affect another:

```js
// module a.js
const b = require('./b');
const c = require('./c');


// module b.js
const d = require('domain').create();
d.on('error', () => { /* silence everything */ });
d.enter();


// module c.js
const dep = require('some-dep');
dep.method(); // Uh-oh! This method doesn't actually exist.
```

Since module `b` enters the domain but never exits, any uncaught exception will
be swallowed, leaving module `c` in the dark as to why it didn't run the entire
script and leaving a potentially partially populated `module.exports`. Doing
this is not the same as listening for `'uncaughtException'`, as the latter is
explicitly meant to globally catch errors. The other issue is that domains are
processed prior to any `'uncaughtException'` handlers, and prevent them from
running.

Another issue is that domains route errors automatically if no `'error'`
handler was set on the event emitter. There is no opt-in mechanism for this,
and it automatically propagates across the entire asynchronous chain.
This may
seem useful at first, but once asynchronous calls are two or more modules deep
and one of them doesn't include an error handler, the creator of the domain will
suddenly be catching unexpected exceptions, and the thrower's exception will go
unnoticed by the author.

The following is a simple example of how a missing `'error'` handler allows
the active domain to hijack the error:

```js
const domain = require('domain');
const net = require('net');
const d = domain.create();
d.on('error', (err) => console.error(err.message));

d.run(() => net.createServer((c) => {
  c.end();
  c.write('bye');
}).listen(8000));
```

Even manually removing the connection via `d.remove(c)` does not prevent the
connection's error from being automatically intercepted.

The failures that plague both error routing and exception handling are the
inconsistencies in how errors are bubbled. The following is an example of how
nested domains will and won't bubble the exception based on when they happen:

```js
const domain = require('domain');
const net = require('net');
const d = domain.create();
d.on('error', () => console.error('d intercepted an error'));

d.run(() => {
  const server = net.createServer((c) => {
    const e = domain.create(); // No 'error' handler being set.
    e.run(() => {
      // This will not be caught by d's error handler.
      setImmediate(() => {
        throw new Error('thrown from setImmediate');
      });
      // Though this one will bubble to d's error handler.
      throw new Error('immediately thrown');
    });
  }).listen(8080);
});
```

It may be expected that nested domains always remain nested and will always
propagate the exception up the domain stack, or that exceptions will never
automatically bubble. Unfortunately both situations occur, leading to
potentially confusing behavior that may even be prone to difficult-to-debug
timing conflicts.
### API Gaps

While APIs based on using `EventEmitter` can use `bind()` and errback-style
callbacks can use `intercept()`, alternative APIs that implicitly bind to the
active domain must be executed inside of `run()`. This means that if module
authors wanted to support domains using a mechanism alternative to those
mentioned, they must manually implement domain support themselves, instead of
being able to leverage the implicit mechanisms already in place.


### Error Propagation

Propagating errors across nested domains is not straightforward, if even
possible. Existing documentation shows a simple example of how to `close()` an
`http` server if there is an error in the request handler. What it does not
explain is how to close the server if the request handler creates another
domain instance for another async request. Using the following as a simple
example of the failure of error propagation:

```js
const d1 = domain.create();
d1.foo = true; // custom member to make more visible in console
d1.on('error', (er) => { /* handle error */ });

d1.run(() => setTimeout(() => {
  const d2 = domain.create();
  d2.bar = 43;
  d2.on('error', (er) => console.error(er.message, domain._stack));
  d2.run(() => {
    setTimeout(() => {
      setTimeout(() => {
        throw new Error('outer');
      });
      throw new Error('inner');
    });
  });
}));
```

Even in the case that the domain instances are being used for local storage so
that access to resources is made available, there is still no way to allow the
error to continue propagating from `d2` back to `d1`. Quick inspection may tell
us that simply throwing from `d2`'s domain `'error'` handler would allow `d1`
to then catch the exception and execute its own error handler, though that is
not the case. Upon inspection of `domain._stack` you'll see that the stack only
contains `d2`.
+ +This may be considered a failing of the API, but even if it did operate in this +way there is still the issue of transmitting the fact that a branch in the +asynchronous execution has failed, and that all further operations in that +branch must cease. In the example of the http request handler, if we fire off +several asynchronous requests and each one then `write()`'s data back to the +client many more errors will arise from attempting to `write()` to a closed +handle. More on this in _Resource Cleanup on Exception_. + + +### Resource Cleanup on Exception + +The following script contains a more complex example of properly cleaning up +in a small resource dependency tree in the case that an exception occurs in a +given connection or any of its dependencies. Breaking down the script into its +basic operations: + +```js +'use strict'; + +const domain = require('domain'); +const EE = require('events'); +const fs = require('fs'); +const net = require('net'); +const util = require('util'); +const print = process._rawDebug; + +const pipeList = []; +const FILENAME = '/tmp/tmp.tmp'; +const PIPENAME = '/tmp/node-domain-example-'; +const FILESIZE = 1024; +let uid = 0; + +// Setting up temporary resources +const buf = Buffer.alloc(FILESIZE); +for (let i = 0; i < buf.length; i++) + buf[i] = ((Math.random() * 1e3) % 78) + 48; // Basic ASCII +fs.writeFileSync(FILENAME, buf); + +function ConnectionResource(c) { + EE.call(this); + this._connection = c; + this._alive = true; + this._domain = domain.create(); + this._id = Math.random().toString(32).substr(2).substr(0, 8) + (++uid); + + this._domain.add(c); + this._domain.on('error', () => { + this._alive = false; + }); +} +util.inherits(ConnectionResource, EE); + +ConnectionResource.prototype.end = function end(chunk) { + this._alive = false; + this._connection.end(chunk); + this.emit('end'); +}; + +ConnectionResource.prototype.isAlive = function isAlive() { + return this._alive; +}; + +ConnectionResource.prototype.id = function 
id() { + return this._id; +}; + +ConnectionResource.prototype.write = function write(chunk) { + this.emit('data', chunk); + return this._connection.write(chunk); +}; + +// Example begin +net.createServer((c) => { + const cr = new ConnectionResource(c); + + const d1 = domain.create(); + fs.open(FILENAME, 'r', d1.intercept((fd) => { + streamInParts(fd, cr, 0); + })); + + pipeData(cr); + + c.on('close', () => cr.end()); +}).listen(8080); + +function streamInParts(fd, cr, pos) { + const d2 = domain.create(); + const alive = true; + d2.on('error', (er) => { + print('d2 error:', er.message); + cr.end(); + }); + fs.read(fd, Buffer.alloc(10), 0, 10, pos, d2.intercept((bRead, buf) => { + if (!cr.isAlive()) { + return fs.close(fd); + } + if (cr._connection.bytesWritten < FILESIZE) { + // Documentation says callback is optional, but doesn't mention that if + // the write fails an exception will be thrown. + const goodtogo = cr.write(buf); + if (goodtogo) { + setTimeout(() => streamInParts(fd, cr, pos + bRead), 1000); + } else { + cr._connection.once('drain', () => streamInParts(fd, cr, pos + bRead)); + } + return; + } + cr.end(buf); + fs.close(fd); + })); +} + +function pipeData(cr) { + const pname = PIPENAME + cr.id(); + const ps = net.createServer(); + const d3 = domain.create(); + const connectionList = []; + d3.on('error', (er) => { + print('d3 error:', er.message); + cr.end(); + }); + d3.add(ps); + ps.on('connection', (conn) => { + connectionList.push(conn); + conn.on('data', () => {}); // don't care about incoming data. 
    conn.on('close', () => {
      connectionList.splice(connectionList.indexOf(conn), 1);
    });
  });
  cr.on('data', (chunk) => {
    for (let i = 0; i < connectionList.length; i++) {
      connectionList[i].write(chunk);
    }
  });
  cr.on('end', () => {
    for (let i = 0; i < connectionList.length; i++) {
      connectionList[i].end();
    }
    ps.close();
  });
  pipeList.push(pname);
  ps.listen(pname);
}

process.on('SIGINT', () => process.exit());
process.on('exit', () => {
  try {
    for (let i = 0; i < pipeList.length; i++) {
      fs.unlinkSync(pipeList[i]);
    }
    fs.unlinkSync(FILENAME);
  } catch (e) { }
});

```

- When a new connection happens, concurrently:
  - Open a file on the file system
  - Open a pipe to a unique socket
- Read a chunk of the file asynchronously
- Write the chunk to both the TCP connection and any listening sockets
- If any of these resources error, notify all other attached resources that
  they need to clean up and shut down

As we can see from this example, a lot more must be done to properly clean up
resources when something fails than what can be done strictly through the
domain API. All that domains offer is an exception aggregation mechanism. Even
the potentially useful ability to propagate data with the domain is easily
countered, in this example, by passing the needed resources as a function
argument.

One problem domains perpetuated was the supposed simplicity of being able to
continue execution of the application despite an unexpected exception,
contrary to what the documentation stated. This example demonstrates the
fallacy behind that idea.

Attempting proper resource cleanup on unexpected exception becomes more complex
as the application itself grows in complexity. This example only has 3 basic
resources in play, all of them with a clear dependency path. If an application
uses something like shared resources or resource reuse, the ability to clean
up, and to properly test that cleanup has been done, grows greatly.
+ +In the end, in terms of handling errors, domains aren't much more than a +glorified `'uncaughtException'` handler. Except with more implicit and +unobservable behavior by third-parties. + + +### Resource Propagation + +Another use case for domains was to use it to propagate data along asynchronous +data paths. One problematic point is the ambiguity of when to expect the +correct domain when there are multiple in the stack (which must be assumed if +the async stack works with other modules). Also the conflict between being +able to depend on a domain for error handling while also having it available to +retrieve the necessary data. + +The following is a involved example demonstrating the failing using domains to +propagate data along asynchronous stacks: + +```js +const domain = require('domain'); +const net = require('net'); + +const server = net.createServer((c) => { + // Use a domain to propagate data across events within the + // connection so that we don't have to pass arguments + // everywhere. + const d = domain.create(); + d.data = { connection: c }; + d.add(c); + // Mock class that does some useless async data transformation + // for demonstration purposes. + const ds = new DataStream(dataTransformed); + c.on('data', (chunk) => ds.data(chunk)); +}).listen(8080, () => console.log('listening on 8080')); + +function dataTransformed(chunk) { + // FAIL! Because the DataStream instance also created a + // domain we have now lost the active domain we had + // hoped to use. + domain.active.data.connection.write(chunk); +} + +function DataStream(cb) { + this.cb = cb; + // DataStream wants to use domains for data propagation too! + // Unfortunately this will conflict with any domain that + // already exists. + this.domain = domain.create(); + this.domain.data = { inst: this }; +} + +DataStream.prototype.data = function data(chunk) { + // This code is self contained, but pretend it's a complex + // operation that crosses at least one other module. 
So
+  // passing along "this", etc., is not easy.
+  this.domain.run(() => {
+    // Simulate an async operation that does the data transform.
+    setImmediate(() => {
+      for (let i = 0; i < chunk.length; i++)
+        chunk[i] = ((chunk[i] + Math.random() * 100) % 96) + 33;
+      // Grab the instance from the active domain and use that
+      // to call the user's callback.
+      const self = domain.active.data.inst;
+      self.cb(chunk);
+    });
+  });
+};
+```
+
+The above shows that it is difficult to have more than one asynchronous API
+attempt to use domains to propagate data. This example could possibly be fixed
+by assigning `parent: domain.active` in the `DataStream` constructor, then
+restoring it via `domain.active = domain.active.data.parent` just before the
+user's callback is called. Also the instantiation of `DataStream` in the
+`'connection'` callback must be run inside `d.run()`, instead of simply using
+`d.add(c)`, otherwise there will be no active domain.
+
+In short, for this to have a prayer of a chance, usage would need to strictly
+adhere to a set of guidelines that would be difficult to enforce or test.
+
+
+## Performance Issues
+
+A significant deterrent from using domains is the overhead. Using node's
+built-in http benchmark, `http_simple.js`, without domains it can handle over
+22,000 requests/second, whereas if it's run with `NODE_USE_DOMAINS=1` that
+number drops to under 17,000 requests/second. In this case there is only
+a single global domain. If we edit the benchmark so the http request callback
+creates a new domain instance, performance drops further to 15,000
+requests/second.
+
+While this probably wouldn't affect a server only serving a few hundred or
+even a thousand requests per second, the amount of overhead is directly
+proportional to the number of asynchronous requests made. So if a single
+connection needs to connect to several other services, all of those will
+contribute to the overall latency of delivering the final product to the
+client.
+
+Using `AsyncWrap` and tracking the number of times
+`init`/`pre`/`post`/`destroy` are called in the mentioned benchmark, we find
+that the sum of all events called is over 170,000 times per second. This means
+even adding 1 microsecond of overhead per call for any type of setup or tear
+down will result in a 17% performance loss. Granted, this is for the optimized
+scenario of the benchmark, but I believe this demonstrates the necessity for a
+mechanism such as domain to be as cheap to run as possible.
+
+
+## Looking Ahead
+
+The domain module has been soft deprecated since Dec 2014, but has not yet been
+removed because node offers no alternative functionality at the moment. As of
+this writing there is ongoing work building out the `AsyncWrap` API and a
+proposal for Zones being prepared for the TC39. Once there is suitable
+functionality to replace domains, the module will undergo the full deprecation
+cycle and eventually be removed from core.
diff --git a/locale/pt-br/docs/guides/dont-block-the-event-loop.md b/locale/pt-br/docs/guides/dont-block-the-event-loop.md
new file mode 100644
index 0000000000000..e720f73b5d01b
--- /dev/null
+++ b/locale/pt-br/docs/guides/dont-block-the-event-loop.md
@@ -0,0 +1,476 @@
+---
+title: Don't Block the Event Loop (or the Worker Pool)
+layout: docs.hbs
+---
+
+# Don't Block the Event Loop (or the Worker Pool)
+
+## Should you read this guide?
+If you're writing anything more complicated than a brief command-line script, reading this should help you write higher-performance, more-secure applications.
+
+This document is written with Node servers in mind, but the concepts apply to complex Node applications as well.
+Where OS-specific details vary, this document is Linux-centric.
+
+## Summary
+Node.js runs JavaScript code in the Event Loop (initialization and callbacks), and offers a Worker Pool to handle expensive tasks like file I/O.
+Node scales well, sometimes better than more heavyweight approaches like Apache.
+The secret to Node's scalability is that it uses a small number of threads to handle many clients. +If Node can make do with fewer threads, then it can spend more of your system's time and memory working on clients rather than on paying space and time overheads for threads (memory, context-switching). +But because Node has only a few threads, you must structure your application to use them wisely. + +Here's a good rule of thumb for keeping your Node server speedy: +*Node is fast when the work associated with each client at any given time is "small"*. + +This applies to callbacks on the Event Loop and tasks on the Worker Pool. + +## Why should I avoid blocking the Event Loop and the Worker Pool? +Node uses a small number of threads to handle many clients. +In Node there are two types of threads: one Event Loop (aka the main loop, main thread, event thread, etc.), and a pool of `k` Workers in a Worker Pool (aka the threadpool). + +If a thread is taking a long time to execute a callback (Event Loop) or a task (Worker), we call it "blocked". +While a thread is blocked working on behalf of one client, it cannot handle requests from any other clients. +This provides two motivations for blocking neither the Event Loop nor the Worker Pool: + +1. Performance: If you regularly perform heavyweight activity on either type of thread, the *throughput* (requests/second) of your server will suffer. +2. Security: If it is possible that for certain input one of your threads might block, a malicious client could submit this "evil input", make your threads block, and keep them from working on other clients. This would be a [Denial of Service](https://en.wikipedia.org/wiki/Denial-of-service_attack) attack. + +## A quick review of Node + +Node uses the Event-Driven Architecture: it has an Event Loop for orchestration and a Worker Pool for expensive tasks. + +### What code runs on the Event Loop? 
+When they begin, Node applications first complete an initialization phase, `require`'ing modules and registering callbacks for events. +Node applications then enter the Event Loop, responding to incoming client requests by executing the appropriate callback. +This callback executes synchronously, and may register asynchronous requests to continue processing after it completes. +The callbacks for these asynchronous requests will also be executed on the Event Loop. + +The Event Loop will also fulfill the non-blocking asynchronous requests made by its callbacks, e.g., network I/O. + +In summary, the Event Loop executes the JavaScript callbacks registered for events, and is also responsible for fulfilling non-blocking asynchronous requests like network I/O. + +### What code runs on the Worker Pool? +Node's Worker Pool is implemented in libuv ([docs](http://docs.libuv.org/en/v1.x/threadpool.html)), which exposes a general task submission API. + +Node uses the Worker Pool to handle "expensive" tasks. +This includes I/O for which an operating system does not provide a non-blocking version, as well as particularly CPU-intensive tasks. + +These are the Node module APIs that make use of this Worker Pool: +1. I/O-intensive + 1. [DNS](https://nodejs.org/api/dns.html): `dns.lookup()`, `dns.lookupService()`. + 2. [File System](https://nodejs.org/api/fs.html#fs_threadpool_usage): All file system APIs except `fs.FSWatcher()` and those that are explicitly synchronous use libuv's threadpool. +2. CPU-intensive + 1. [Crypto](https://nodejs.org/api/crypto.html): `crypto.pbkdf2()`, `crypto.scrypt()`, `crypto.randomBytes()`, `crypto.randomFill()`, `crypto.generateKeyPair()`. + 2. [Zlib](https://nodejs.org/api/zlib.html#zlib_threadpool_usage): All zlib APIs except those that are explicitly synchronous use libuv's threadpool. + +In many Node applications, these APIs are the only sources of tasks for the Worker Pool. 
Applications and modules that use a [C++ add-on](https://nodejs.org/api/addons.html) can submit other tasks to the Worker Pool. + +For the sake of completeness, we note that when you call one of these APIs from a callback on the Event Loop, the Event Loop pays some minor setup costs as it enters the Node C++ bindings for that API and submits a task to the Worker Pool. +These costs are negligible compared to the overall cost of the task, which is why the Event Loop is offloading it. +When submitting one of these tasks to the Worker Pool, Node provides a pointer to the corresponding C++ function in the Node C++ bindings. + +### How does Node decide what code to run next? +Abstractly, the Event Loop and the Worker Pool maintain queues for pending events and pending tasks, respectively. + +In truth, the Event Loop does not actually maintain a queue. +Instead, it has a collection of file descriptors that it asks the operating system to monitor, using a mechanism like [epoll](http://man7.org/linux/man-pages/man7/epoll.7.html) (Linux), [kqueue](https://developer.apple.com/library/content/documentation/Darwin/Conceptual/FSEvents_ProgGuide/KernelQueues/KernelQueues.html) (OSX), event ports (Solaris), or [IOCP](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365198.aspx) (Windows). +These file descriptors correspond to network sockets, any files it is watching, and so on. +When the operating system says that one of these file descriptors is ready, the Event Loop translates it to the appropriate event and invokes the callback(s) associated with that event. +You can learn more about this process [here](https://www.youtube.com/watch?v=P9csgxBgaZ8). + +In contrast, the Worker Pool uses a real queue whose entries are tasks to be processed. +A Worker pops a task from this queue and works on it, and when finished the Worker raises an "At least one task is finished" event for the Event Loop. + +### What does this mean for application design? 
+In a one-thread-per-client system like Apache, each pending client is assigned its own thread. +If a thread handling one client blocks, the operating system will interrupt it and give another client a turn. +The operating system thus ensures that clients that require a small amount of work are not penalized by clients that require more work. + +Because Node handles many clients with few threads, if a thread blocks handling one client's request, then pending client requests may not get a turn until the thread finishes its callback or task. +*The fair treatment of clients is thus the responsibility of your application*. +This means that you shouldn't do too much work for any client in any single callback or task. + +This is part of why Node can scale well, but it also means that you are responsible for ensuring fair scheduling. +The next sections talk about how to ensure fair scheduling for the Event Loop and for the Worker Pool. + +## Don't block the Event Loop +The Event Loop notices each new client connection and orchestrates the generation of a response. +All incoming requests and outgoing responses pass through the Event Loop. +This means that if the Event Loop spends too long at any point, all current and new clients will not get a turn. + +You should make sure you never block the Event Loop. +In other words, each of your JavaScript callbacks should complete quickly. +This of course also applies to your `await`'s, your `Promise.then`'s, and so on. + +A good way to ensure this is to reason about the ["computational complexity"](https://en.wikipedia.org/wiki/Time_complexity) of your callbacks. +If your callback takes a constant number of steps no matter what its arguments are, then you'll always give every pending client a fair turn. +If your callback takes a different number of steps depending on its arguments, then you should think about how long the arguments might be. + +Example 1: A constant-time callback. 
+
+```javascript
+app.get('/constant-time', (req, res) => {
+  res.sendStatus(200);
+});
+```
+
+Example 2: An `O(n)` callback. This callback will run quickly for small `n` and more slowly for large `n`.
+
+```javascript
+app.get('/countToN', (req, res) => {
+  let n = req.query.n;
+
+  // n iterations before giving someone else a turn
+  for (let i = 0; i < n; i++) {
+    console.log(`Iter ${i}`);
+  }
+
+  res.sendStatus(200);
+});
+```
+
+Example 3: An `O(n^2)` callback. This callback will still run quickly for small `n`, but for large `n` it will run much more slowly than the previous `O(n)` example.
+
+```javascript
+app.get('/countToN2', (req, res) => {
+  let n = req.query.n;
+
+  // n^2 iterations before giving someone else a turn
+  for (let i = 0; i < n; i++) {
+    for (let j = 0; j < n; j++) {
+      console.log(`Iter ${i}.${j}`);
+    }
+  }
+
+  res.sendStatus(200);
+});
+```
+
+### How careful should you be?
+Node uses the Google V8 engine for JavaScript, which is quite fast for many common operations.
+Exceptions to this rule are regexps and JSON operations, discussed below.
+
+However, for complex tasks you should consider bounding the input and rejecting inputs that are too long.
+That way, even if your callback has large complexity, by bounding the input you ensure the callback cannot take more than the worst-case time on the longest acceptable input.
+You can then evaluate the worst-case cost of this callback and determine whether its running time is acceptable in your context.
+
+### Blocking the Event Loop: REDOS
+One common way to block the Event Loop disastrously is by using a "vulnerable" [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions).
+
+#### Avoiding vulnerable regular expressions
+A regular expression (regexp) matches an input string against a pattern.
+We usually think of a regexp match as requiring a single pass through the input string --- `O(n)` time where `n` is the length of the input string.
+In many cases, a single pass is indeed all it takes. +Unfortunately, in some cases the regexp match might require an exponential number of trips through the input string --- `O(2^n)` time. +An exponential number of trips means that if the engine requires `x` trips to determine a match, it will need `2*x` trips if we add only one more character to the input string. +Since the number of trips is linearly related to the time required, the effect of this evaluation will be to block the Event Loop. + +A *vulnerable regular expression* is one on which your regular expression engine might take exponential time, exposing you to [REDOS](https://www.owasp.org/index.php/Regular_expression_Denial_of_Service_-_ReDoS) on "evil input". +Whether or not your regular expression pattern is vulnerable (i.e. the regexp engine might take exponential time on it) is actually a difficult question to answer, and varies depending on whether you're using Perl, Python, Ruby, Java, JavaScript, etc., but here are some rules of thumb that apply across all of these languages: + +1. Avoid nested quantifiers like `(a+)*`. Node's regexp engine can handle some of these quickly, but others are vulnerable. +2. Avoid OR's with overlapping clauses, like `(a|a)*`. Again, these are sometimes-fast. +3. Avoid using backreferences, like `(a.*) \1`. No regexp engine can guarantee evaluating these in linear time. +4. If you're doing a simple string match, use `indexOf` or the local equivalent. It will be cheaper and will never take more than `O(n)`. + +If you aren't sure whether your regular expression is vulnerable, remember that Node generally doesn't have trouble reporting a *match* even for a vulnerable regexp and a long input string. +The exponential behavior is triggered when there is a mismatch but Node can't be certain until it tries many paths through the input string. 
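Rule 4 above can be sketched concretely; `containsNeedle` and its fixed pattern are illustrative assumptions, not part of the original example:

```javascript
// Hypothetical helper: a fixed substring check. indexOf scans the
// input once, so the worst case is O(n) with no backtracking,
// unlike a potentially vulnerable regexp.
function containsNeedle(input) {
  return input.indexOf('needle') !== -1;
}

console.log(containsNeedle('haystack with a needle inside')); // true
console.log(containsNeedle('just hay')); // false
```

Even adversarially constructed input cannot make this check take more than one pass over the string.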
+
+#### A REDOS example
+Here is an example vulnerable regexp exposing its server to REDOS:
+
+```javascript
+app.get('/redos-me', (req, res) => {
+  let filePath = req.query.filePath;
+
+  // REDOS
+  if (filePath.match(/(\/.+)+$/)) {
+    console.log('valid path');
+  }
+  else {
+    console.log('invalid path');
+  }
+
+  res.sendStatus(200);
+});
+```
+
+The vulnerable regexp in this example is a (bad!) way to check for a valid path on Linux.
+It matches strings that are a sequence of "/"-delimited names, like "/a/b/c".
+It is dangerous because it violates rule 1: it has a doubly-nested quantifier.
+
+If a client queries with filePath `///.../\n` (100 /'s followed by a newline character that the regexp's "." won't match), then the regexp match will take effectively forever, blocking the Event Loop.
+This client's REDOS attack causes all other clients not to get a turn until the regexp match finishes.
+
+For this reason, you should be leery of using complex regular expressions to validate user input.
+
+#### Anti-REDOS Resources
+There are some tools to check your regexps for safety, like
+- [safe-regex](https://github.com/substack/safe-regex)
+- [rxxr2](http://www.cs.bham.ac.uk/~hxt/research/rxxr2/)
+
+However, neither of these will catch all vulnerable regexps.
+
+Another approach is to use a different regexp engine.
+You could use the [node-re2](https://github.com/uhop/node-re2) module, which uses Google's blazing-fast [RE2](https://github.com/google/re2) regexp engine.
+But be warned, RE2 is not 100% compatible with Node's regexps, so check for regressions if you swap in the node-re2 module to handle your regexps.
+And particularly complicated regexps are not supported by node-re2.
+
+If you're trying to match something "obvious", like a URL or a file path, find an example in a [regexp library](http://www.regexlib.com) or use an npm module, e.g. [ip-regex](https://www.npmjs.com/package/ip-regex).
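For the path-validation example above, one linear-time alternative is to avoid regexps entirely; this is a hedged sketch (the helper name and exact validation rules are assumptions), not the only safe approach:

```javascript
// Hypothetical alternative: validate a "/"-delimited path without a
// regexp. Splitting and checking each segment is O(n) on any input,
// including the 100-slashes-plus-newline "evil input" described above.
function isValidPath(filePath) {
  if (typeof filePath !== 'string' || filePath[0] !== '/') return false;
  // Every "/"-separated segment must be non-empty and newline-free.
  return filePath
    .slice(1)
    .split('/')
    .every((segment) => segment.length > 0 && !segment.includes('\n'));
}

console.log(isValidPath('/a/b/c')); // true
console.log(isValidPath('/a//b')); // false
```

Because there is no backtracking, the worst case is a single pass over the string regardless of what the client sends.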
+ +### Blocking the Event Loop: Node core modules +Several Node core modules have synchronous expensive APIs, including: +- [Encryption](https://nodejs.org/api/crypto.html) +- [Compression](https://nodejs.org/api/zlib.html) +- [File system](https://nodejs.org/api/fs.html) +- [Child process](https://nodejs.org/api/child_process.html) + +These APIs are expensive, because they involve significant computation (encryption, compression), require I/O (file I/O), or potentially both (child process). These APIs are intended for scripting convenience, but are not intended for use in the server context. If you execute them on the Event Loop, they will take far longer to complete than a typical JavaScript instruction, blocking the Event Loop. + +In a server, *you should not use the following synchronous APIs from these modules*: +- Encryption: + - `crypto.randomBytes` (synchronous version) + - `crypto.randomFillSync` + - `crypto.pbkdf2Sync` + - You should also be careful about providing large input to the encryption and decryption routines. +- Compression: + - `zlib.inflateSync` + - `zlib.deflateSync` +- File system: + - Do not use the synchronous file system APIs. For example, if the file you access is in a [distributed file system](https://en.wikipedia.org/wiki/Clustered_file_system#Distributed_file_systems) like [NFS](https://en.wikipedia.org/wiki/Network_File_System), access times can vary widely. +- Child process: + - `child_process.spawnSync` + - `child_process.execSync` + - `child_process.execFileSync` + +This list is reasonably complete as of Node v9. + +### Blocking the Event Loop: JSON DOS +`JSON.parse` and `JSON.stringify` are other potentially expensive operations. +While these are `O(n)` in the length of the input, for large `n` they can take surprisingly long. + +If your server manipulates JSON objects, particularly those from a client, you should be cautious about the size of the objects or strings you work with on the Event Loop. + +Example: JSON blocking. 
We create an object `obj` of size 2^21 and `JSON.stringify` it, run `indexOf` on the string, and then `JSON.parse` it. The `JSON.stringify`'d string is 50MB. It takes 0.7 seconds to stringify the object, 0.03 seconds to indexOf on the 50MB string, and 1.3 seconds to parse the string.
+
+```javascript
+var obj = { a: 1 };
+var niter = 20;
+
+var before, str, pos, res, took;
+
+for (var i = 0; i < niter; i++) {
+  obj = { obj1: obj, obj2: obj }; // Doubles in size each iter
+}
+
+before = process.hrtime();
+str = JSON.stringify(obj);
+took = process.hrtime(before);
+console.log('JSON.stringify took ' + took);
+
+before = process.hrtime();
+pos = str.indexOf('nomatch');
+took = process.hrtime(before);
+console.log('Pure indexof took ' + took);
+
+before = process.hrtime();
+res = JSON.parse(str);
+took = process.hrtime(before);
+console.log('JSON.parse took ' + took);
+```
+
+There are npm modules that offer asynchronous JSON APIs. See for example:
+- [JSONStream](https://www.npmjs.com/package/JSONStream), which has stream APIs.
+- [Big-Friendly JSON](https://github.com/philbooth/bfj), which has stream APIs as well as asynchronous versions of the standard JSON APIs using the partitioning-on-the-Event-Loop paradigm outlined below.
+
+### Complex calculations without blocking the Event Loop
+Suppose you want to do complex calculations in JavaScript without blocking the Event Loop.
+You have two options: partitioning or offloading.
+
+#### Partitioning
+You could *partition* your calculations so that each runs on the Event Loop but regularly yields (gives turns to) other pending events.
+In JavaScript it's easy to save the state of an ongoing task in a closure, as shown in example 2 below.
+
+For a simple example, suppose you want to compute the average of the numbers `1` to `n`.
+
+Example 1: Un-partitioned average, costs `O(n)`
+```javascript
+let sum = 0;
+for (let i = 1; i <= n; i++)
+  sum += i;
+let avg = sum / n;
+console.log('avg: ' + avg);
+```
+
+Example 2: Partitioned average, each of the `n` asynchronous steps costs `O(1)`.
+```javascript
+function asyncAvg(n, avgCB) {
+  // Save ongoing sum in JS closure.
+  var sum = 0;
+  function help(i, cb) {
+    sum += i;
+    if (i == n) {
+      cb(sum);
+      return;
+    }
+
+    // "Asynchronous recursion".
+    // Schedule next operation asynchronously.
+    setImmediate(help.bind(null, i+1, cb));
+  }
+
+  // Start the helper, with CB to call avgCB.
+  help(1, function(sum){
+    var avg = sum/n;
+    avgCB(avg);
+  });
+}
+
+asyncAvg(n, function(avg){
+  console.log('avg of 1-n: ' + avg);
+});
+```
+
+You can apply this principle to array iterations and so forth.
+
+#### Offloading
+If you need to do something more complex, partitioning is not a good option.
+This is because partitioning uses only the Event Loop, and you won't benefit from the multiple cores almost certainly available on your machine.
+*Remember, the Event Loop should orchestrate client requests, not fulfill them itself.*
+For a complicated task, move the work off of the Event Loop onto a Worker Pool.
+
+##### How to offload
+You have two options for a destination Worker Pool to which to offload work.
+1. You can use the built-in Node Worker Pool by developing a [C++ addon](https://nodejs.org/api/addons.html). On older versions of Node, build your C++ addon using [NAN](https://github.com/nodejs/nan), and on newer versions use [N-API](https://nodejs.org/api/n-api.html). [node-webworker-threads](https://www.npmjs.com/package/webworker-threads) offers a JavaScript-only way to access Node's Worker Pool.
+2. You can create and manage your own Worker Pool dedicated to computation rather than Node's I/O-themed Worker Pool. The most straightforward way to do this is to use [Child Process](https://nodejs.org/api/child_process.html) or [Cluster](https://nodejs.org/api/cluster.html).
+ +You should *not* simply create a [Child Process](https://nodejs.org/api/child_process.html) for every client. +You can receive client requests more quickly than you can create and manage children, and your server might become a [fork bomb](https://en.wikipedia.org/wiki/Fork_bomb). + +##### Downside of offloading +The downside of the offloading approach is that it incurs overhead in the form of *communication costs*. +Only the Event Loop is allowed to see the "namespace" (JavaScript state) of your application. +From a Worker, you cannot manipulate a JavaScript object in the Event Loop's namespace. +Instead, you have to serialize and deserialize any objects you wish to share. +Then the Worker can operate on its own copy of these object(s) and return the modified object (or a "patch") to the Event Loop. + +For serialization concerns, see the section on JSON DOS. + +##### Some suggestions for offloading +You may wish to distinguish between CPU-intensive and I/O-intensive tasks because they have markedly different characteristics. + +A CPU-intensive task only makes progress when its Worker is scheduled, and the Worker must be scheduled onto one of your machine's [logical cores](https://nodejs.org/api/os.html#os_os_cpus). +If you have 4 logical cores and 5 Workers, one of these Workers cannot make progress. +As a result, you are paying overhead (memory and scheduling costs) for this Worker and getting no return for it. + +I/O-intensive tasks involve querying an external service provider (DNS, file system, etc.) and waiting for its response. +While a Worker with an I/O-intensive task is waiting for its response, it has nothing else to do and can be de-scheduled by the operating system, giving another Worker a chance to submit their request. +Thus, *I/O-intensive tasks will be making progress even while the associated thread is not running*. 
+External service providers like databases and file systems have been highly optimized to handle many pending requests concurrently. +For example, a file system will examine a large set of pending write and read requests to merge conflicting updates and to retrieve files in an optimal order (e.g. see [these slides](http://researcher.ibm.com/researcher/files/il-AVISHAY/01-block_io-v1.3.pdf)). + +If you rely on only one Worker Pool, e.g. the Node Worker Pool, then the differing characteristics of CPU-bound and I/O-bound work may harm your application's performance. + +For this reason, you might wish to maintain a separate Computation Worker Pool. + +#### Offloading: conclusions +For simple tasks, like iterating over the elements of an arbitrarily long array, partitioning might be a good option. +If your computation is more complex, offloading is a better approach: the communication costs, i.e. the overhead of passing serialized objects between the Event Loop and the Worker Pool, are offset by the benefit of using multiple cores. + +However, if your server relies heavily on complex calculations, you should think about whether Node is really a good fit. Node excels for I/O-bound work, but for expensive computation it might not be the best option. + +If you take the offloading approach, see the section on not blocking the Worker Pool. + +## Don't block the Worker Pool +Node has a Worker Pool composed of `k` Workers. +If you are using the Offloading paradigm discussed above, you might have a separate Computational Worker Pool, to which the same principles apply. +In either case, let us assume that `k` is much smaller than the number of clients you might be handling concurrently. +This is in keeping with Node's "one thread for many clients" philosophy, the secret to its scalability. + +As discussed above, each Worker completes its current Task before proceeding to the next one on the Worker Pool queue. 
+
+Now, there will be variation in the cost of the Tasks required to handle your clients' requests.
+Some Tasks can be completed quickly (e.g. reading short or cached files, or producing a small number of random bytes), and others will take longer (e.g. reading larger or uncached files, or generating more random bytes).
+Your goal should be to *minimize the variation in Task times*, and you should use *Task partitioning* to accomplish this.
+
+### Minimizing the variation in Task times
+If a Worker's current Task is much more expensive than other Tasks, then it will be unavailable to work on other pending Tasks.
+In other words, *each relatively long Task effectively decreases the size of the Worker Pool by one until it is completed*.
+This is undesirable because, up to a point, the more Workers in the Worker Pool, the greater the Worker Pool throughput (tasks/second) and thus the greater the server throughput (client requests/second).
+One client with a relatively expensive Task will decrease the throughput of the Worker Pool, in turn decreasing the throughput of the server.
+
+To avoid this, you should try to minimize variation in the length of Tasks you submit to the Worker Pool.
+While it is appropriate to treat the external systems accessed by your I/O requests (DB, FS, etc.) as black boxes, you should be aware of the relative cost of these I/O requests, and should avoid submitting requests you can expect to be particularly long.
+
+Two examples should illustrate the possible variation in task times.
+
+#### Variation example: Long-running file system reads
+Suppose your server must read files in order to handle some client requests.
+After consulting Node's [File system](https://nodejs.org/api/fs.html) APIs, you opted to use `fs.readFile()` for simplicity.
+However, `fs.readFile()` is ([currently](https://github.com/nodejs/node/pull/17054)) not partitioned: it submits a single `fs.read()` Task spanning the entire file.
+If you read shorter files for some users and longer files for others, `fs.readFile()` may introduce significant variation in Task lengths, to the detriment of Worker Pool throughput. + +For a worst-case scenario, suppose an attacker can convince your server to read an *arbitrary* file (this is a [directory traversal vulnerability](https://www.owasp.org/index.php/Path_Traversal)). +If your server is running Linux, the attacker can name an extremely slow file: [`/dev/random`](http://man7.org/linux/man-pages/man4/random.4.html). +For all practical purposes, `/dev/random` is infinitely slow, and every Worker asked to read from `/dev/random` will never finish that Task. +An attacker then submits `k` requests, one for each Worker, and no other client requests that use the Worker Pool will make progress. + +#### Variation example: Long-running crypto operations +Suppose your server generates cryptographically secure random bytes using [`crypto.randomBytes()`](https://nodejs.org/api/crypto.html#crypto_crypto_randombytes_size_callback). +`crypto.randomBytes()` is not partitioned: it creates a single `randomBytes()` Task to generate as many bytes as you requested. +If you create fewer bytes for some users and more bytes for others, `crypto.randomBytes()` is another source of variation in Task lengths. + +### Task partitioning +Tasks with variable time costs can harm the throughput of the Worker Pool. +To minimize variation in Task times, as far as possible you should *partition* each Task into comparable-cost sub-Tasks. +When each sub-Task completes it should submit the next sub-Task, and when the final sub-Task completes it should notify the submitter. + +To continue the `fs.readFile()` example, you should instead use `fs.read()` (manual partitioning) or `ReadStream` (automatically partitioned). + +The same principle applies to CPU-bound tasks; the `asyncAvg` example might be inappropriate for the Event Loop, but it is well suited to the Worker Pool. 
+ +When you partition a Task into sub-Tasks, shorter Tasks expand into a small number of sub-Tasks, and longer Tasks expand into a larger number of sub-Tasks. +Between each sub-Task of a longer Task, the Worker to which it was assigned can work on a sub-Task from another, shorter, Task, thus improving the overall Task throughput of the Worker Pool. + +Note that the number of sub-Tasks completed is not a useful metric for the throughput of the Worker Pool. +Instead, concern yourself with the number of *Tasks* completed. + +### Avoiding Task partitioning +Recall that the purpose of Task partitioning is to minimize the variation in Task times. +If you can distinguish between shorter Tasks and longer Tasks (e.g. summing an array vs. sorting an array), you could create one Worker Pool for each class of Task. +Routing shorter Tasks and longer Tasks to separate Worker Pools is another way to minimize Task time variation. + +In favor of this approach, partitioning Tasks incurs overhead (the costs of creating a Worker Pool Task representation and of manipulating the Worker Pool queue), and avoiding partitioning saves you the costs of additional trips to the Worker Pool. +It also keeps you from making mistakes in partitioning your Tasks. + +The downside of this approach is that Workers in all of these Worker Pools will incur space and time overheads and will compete with each other for CPU time. +Remember that each CPU-bound Task makes progress only while it is scheduled. +As a result, you should only consider this approach after careful analysis. + +### Worker Pool: conclusions +Whether you use only the Node Worker Pool or maintain separate Worker Pool(s), you should optimize the Task throughput of your Pool(s). + +To do this, minimize the variation in Task times by using Task partitioning. + +## The risks of npm modules +While the Node core modules offer building blocks for a wide variety of applications, sometimes something more is needed. 
Node developers benefit tremendously from the [npm ecosystem](https://www.npmjs.com/), with hundreds of thousands of modules offering functionality to accelerate your development process. + +Remember, however, that the majority of these modules are written by third-party developers and are generally released with only best-effort guarantees. A developer using an npm module should be concerned about two things, though the latter is frequently forgotten. +1. Does it honor its APIs? +2. Might its APIs block the Event Loop or a Worker? +Many modules make no effort to indicate the cost of their APIs, to the detriment of the community. + +For simple APIs you can estimate the cost of the APIs; the cost of string manipulation isn't hard to fathom. +But in many cases it's unclear how much an API might cost. + +*If you are calling an API that might do something expensive, double-check the cost. Ask the developers to document it, or examine the source code yourself (and submit a PR documenting the cost).* + +Remember, even if the API is asynchronous, you don't know how much time it might spend on a Worker or on the Event Loop in each of its partitions. +For example, suppose in the `asyncAvg` example given above, each call to the helper function summed *half* of the numbers rather than one of them. +Then this function would still be asynchronous, but the cost of each partition would be `O(n)`, not `O(1)`, making it much less safe to use for arbitrary values of `n`. + +## Conclusion +Node has two types of threads: one Event Loop and `k` Workers. +The Event Loop is responsible for JavaScript callbacks and non-blocking I/O, and a Worker executes tasks corresponding to C++ code that completes an asynchronous request, including blocking I/O and CPU-intensive work. +Both types of threads work on no more than one activity at a time. +If any callback or task takes a long time, the thread running it becomes *blocked*. 
+If your application makes blocking callbacks or tasks, this can lead to degraded throughput (clients/second) at best, and complete denial of service at worst. + +To write a high-throughput, more DoS-proof web server, you must ensure that on benign and on malicious input, neither your Event Loop nor your Workers will block. diff --git a/locale/pt-br/docs/meta/topics/dependencies.md b/locale/pt-br/docs/meta/topics/dependencies.md new file mode 100644 index 0000000000000..8e3619f699042 --- /dev/null +++ b/locale/pt-br/docs/meta/topics/dependencies.md @@ -0,0 +1,102 @@ +--- +title: Dependencies +layout: docs.hbs +--- + +# Dependencies + +There are several dependencies that Node.js relies on to work the way it does. + +- [Libraries](#libraries) + - [V8](#v8) + - [libuv](#libuv) + - [http-parser](#http-parser) + - [c-ares](#c-ares) + - [OpenSSL](#openssl) + - [zlib](#zlib) +- [Tools](#tools) + - [npm](#npm) + - [gyp](#gyp) + - [gtest](#gtest) + +## Libraries + +### V8 + +The V8 library provides Node.js with a JavaScript engine, which Node.js +controls via the V8 C++ API. V8 is maintained by Google, for use in Chrome. + +- [Documentation](https://v8docs.nodesource.com/) + +### libuv + +Another important dependency is libuv, a C library that is used to abstract +non-blocking I/O operations to a consistent interface across all supported +platforms. It provides mechanisms to handle file system, DNS, network, child +processes, pipes, signal handling, polling and streaming. It also includes a +thread pool for offloading work for some things that can't be done +asynchronously at the operating system level. + +- [Documentation](http://docs.libuv.org/) + +### http-parser + +HTTP parsing is handled by a lightweight C library called http-parser. It is +designed to not make any syscalls or allocations, so it has a very small +per-request memory footprint. 
+ +- [Documentation](https://github.com/joyent/http-parser/) + +### c-ares + +For some asynchronous DNS requests, Node.js uses a C library called c-ares. +It is exposed through the DNS module in JavaScript as the `resolve()` family of +functions. The `lookup()` function, which is what the rest of core uses, makes +use of threaded `getaddrinfo(3)` calls in libuv. The reason for this is that +c-ares supports /etc/hosts, /etc/resolv.conf and /etc/svc.conf, but not things +like mDNS. + +- [Documentation](http://c-ares.haxx.se/docs.html) + +### OpenSSL + +OpenSSL is used extensively in both the `tls` and `crypto` modules. It provides +battle-tested implementations of many cryptographic functions that the modern +web relies on for security. + +- [Documentation](https://www.openssl.org/docs/) + +### zlib + +For fast compression and decompression, Node.js relies on the industry-standard +zlib library, also known for its use in gzip and libpng. Node.js uses zlib to +create sync, async and streaming compression and decompression interfaces. + +- [Documentation](http://www.zlib.net/manual.html) + +## Tools + +### npm + +Node.js is all about modularity, and with that comes the need for a quality +package manager; for this purpose, npm was made. With npm comes the largest +selection of community-created packages of any programming ecosystem, +which makes building Node.js apps quick and easy. + +- [Documentation](https://docs.npmjs.com/) + +### gyp + +The build system is handled by gyp, a python-based project generator copied +from V8. It can generate project files for use with build systems across many +platforms. Node.js requires a build system because large parts of it — and its +dependencies — are written in languages that require compilation. + +- [Documentation](https://gyp.gsrc.io/docs/UserDocumentation.md) + +### gtest + +Native code can be tested using gtest, which is taken from Chromium. 
It allows +testing C/C++ without needing an existing node executable to bootstrap from. + +- [Documentation](https://code.google.com/p/googletest/wiki/V1_7_Documentation) From d4ad103c31dd59f369980ba5bc19abfe4edc06ed Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Fri, 14 Jun 2019 11:27:27 -0300 Subject: [PATCH 02/17] Translate dependencies from 3.1 --- locale/pt-br/docs/meta/topics/dependencies.md | 122 +++++++++--------- 1 file changed, 64 insertions(+), 58 deletions(-) diff --git a/locale/pt-br/docs/meta/topics/dependencies.md b/locale/pt-br/docs/meta/topics/dependencies.md index 8e3619f699042..10a7e90ef8ea6 100644 --- a/locale/pt-br/docs/meta/topics/dependencies.md +++ b/locale/pt-br/docs/meta/topics/dependencies.md @@ -1,102 +1,108 @@ --- -title: Dependencies +title: Dependências layout: docs.hbs --- -# Dependencies +# Dependências -There are several dependencies that Node.js relies on to work the way it does. +O Node.js precisa de diversas dependências para funcionar do jeito que funciona atualmente. -- [Libraries](#libraries) - - [V8](#v8) - - [libuv](#libuv) - - [http-parser](#http-parser) - - [c-ares](#c-ares) - - [OpenSSL](#openssl) - - [zlib](#zlib) -- [Tools](#tools) - - [npm](#npm) - - [gyp](#gyp) - - [gtest](#gtest) +- [Dependências](#dependências) + - [Bibliotecas](#bibliotecas) + - [V8](#v8) + - [libuv](#libuv) + - [http-parser](#http-parser) + - [c-ares](#c-ares) + - [OpenSSL](#openssl) + - [zlib](#zlib) + - [Ferramentas](#ferramentas) + - [npm](#npm) + - [gyp](#gyp) + - [gtest](#gtest) -## Libraries +## Bibliotecas ### V8 -The V8 library provides Node.js with a JavaScript engine, which Node.js -controls via the V8 C++ API. V8 is maintained by Google, for use in Chrome. +A biblioteca do V8 provê um engine Javascript para o Node.js, o qual +é controlado pela API C++ do próprio V8. O V8 é atualmente mantido +pelo Google, por conta de seu uso no navegador Chrome. 
-- [Documentation](https://v8docs.nodesource.com/) +- [Documentação](https://v8docs.nodesource.com/) ### libuv -Another important dependency is libuv, a C library that is used to abstract -non-blocking I/O operations to a consistent interface across all supported -platforms. It provides mechanisms to handle file system, DNS, network, child -processes, pipes, signal handling, polling and streaming. It also includes a -thread pool for offloading work for some things that can't be done -asynchronously at the operating system level. +Uma outra dependência importante é a *libuv*. Uma biblioteca escrita em C que +é utilizada para abstrair todas as operações que não bloqueiam o I/O para uma +interface consistente por todas as plataformas suportadas. Ela provê mecanismos +para lidar com o sistema de arquivos, DNS, rede, processos filhos, pipes, +tratamento de sinais, polling e streaming. Ela também inclui uma pool de threads +para distribuir o trabalho que não pode ser feito assíncronamente a nível de SO. -- [Documentation](http://docs.libuv.org/) +- [Documentação](http://docs.libuv.org/) ### http-parser -HTTP parsing is handled by a lightweight C library called http-parser. It is -designed to not make any syscalls or allocations, so it has a very small -per-request memory footprint. +O parsing do protocolo HTTP é delegado a uma biblioteca leve, escrita em C, +chamada *http-parser*. Ela foi desenhada para não fazer nenhuma syscall ou +alocações, portanto acaba possuindo um baixo consumo de memória por requisição. -- [Documentation](https://github.com/joyent/http-parser/) +- [Documentação](https://github.com/joyent/http-parser/) ### c-ares -For some asynchronous DNS requests, Node.js uses a C library called c-ares. -It is exposed through the DNS module in JavaScript as the `resolve()` family of -functions. The `lookup()` function, which is what the rest of core uses, makes -use of threaded `getaddrinfo(3)` calls in libuv. 
The reason for this is that -c-ares supports /etc/hosts, /etc/resolv.conf and /etc/svc.conf, but not things -like mDNS. +Para algumas requisições assíncronas de DNS, o Node.js utilizar uma biblioteca +escrita em C chamada *c-ares*. Ela é exposta através do módulo de DNS no Javascript +na família `resolve()` de funções. A função `lookup()`, que é o que o resto do core +do Node.js usa, faz uso de uma chamada `getaddrinfo(3)` que é processada em threads +na libuv. A razão por trás disso é que o c-ares suporta caminhos como +/etc/hosts, /etc/resolv.conf e /etc/svc.conf, mas não outras coisas como o mDNS. -- [Documentation](http://c-ares.haxx.se/docs.html) +- [Documentação](http://c-ares.haxx.se/docs.html) ### OpenSSL -OpenSSL is used extensively in both the `tls` and `crypto` modules. It provides -battle-tested implementations of many cryptographic functions that the modern -web relies on for security. +O OpenSSL é extensivamente usado nos módulos `tls` e `crypto` do Node.js. Ele provê +uma implementação altamente testada de várias funções criptográficas que a web +moderna utiliza em grande escala a fim de manter a segurança. -- [Documentation](https://www.openssl.org/docs/) +- [Documentação](https://www.openssl.org/docs/) ### zlib -For fast compression and decompression, Node.js relies on the industry-standard -zlib library, also known for its use in gzip and libpng. Node.js uses zlib to -create sync, async and streaming compression and decompression interfaces. +Para compressões e descompressões rápidas, o Node.js utiliza na biblioteca zlib, +que é o padrão da indústria, também conhecida pelo seu uso nas bibliotecas gzip e libpng. +O Node.js utiliza o zlib para criar interfaces de descompressão e compressão que podem +ser síncronas, assíncronas ou através de streaming. 
-- [Documentation](http://www.zlib.net/manual.html) +- [Documentação](http://www.zlib.net/manual.html) -## Tools +## Ferramentas ### npm -Node.js is all about modularity, and with that comes the need for a quality -package manager; for this purpose, npm was made. With npm comes the largest -selection of community-created packages of any programming ecosystem, -which makes building Node.js apps quick and easy. +Uma das grandes vantagens do Node.js é sua modularidade, e com isso vem a +necessidade de um gerenciador de pacotes de qualidade. O NPM foi justamente +criado para isto. Com ele, temos a maior seleção de pacotes criados pela +comunidade, maior do que em todos os outros ecossistemas existentes. Isto +faz com que construir uma aplicação Node.js seja fácil e rápida. -- [Documentation](https://docs.npmjs.com/) +- [Documentação](https://docs.npmjs.com/) ### gyp -The build system is handled by gyp, a python-based project generator copied -from V8. It can generate project files for use with build systems across many -platforms. Node.js requires a build system because large parts of it — and its -dependencies — are written in languages that require compilation. +Todo o sistema de build é gerenciado pelo gyp, um gerador de projetos baseado +em python que foi copiado do V8. Ele pode gerar arquivos de projeto para uso +em sistemas de build de diversas plataformas. O Node.js precisa disso porque +grande parte do próprio Node – e também de suas dependências – são escritas em +linguagens que requerem compilação, como C++. -- [Documentation](https://gyp.gsrc.io/docs/UserDocumentation.md) +- [Documentação](https://gyp.gsrc.io/docs/UserDocumentação.md) ### gtest -Native code can be tested using gtest, which is taken from Chromium. It allows -testing C/C++ without needing an existing node executable to bootstrap from. +O código nativo pode ser testado usando gtest, que foi tirado do Chromium. 
Ele +permite testes de C/C++ sem a necessidade de um executável node existente para +dar o bootstrap inicial. -- [Documentation](https://code.google.com/p/googletest/wiki/V1_7_Documentation) +- [Documentação](https://code.google.com/p/googletest/wiki/V1_7_Documentação) From 26d3eed13828043c8f5e790621ac5f270299f79b Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Fri, 14 Jun 2019 11:34:46 -0300 Subject: [PATCH 03/17] Add english comments --- locale/pt-br/docs/meta/topics/dependencies.md | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/locale/pt-br/docs/meta/topics/dependencies.md b/locale/pt-br/docs/meta/topics/dependencies.md index 10a7e90ef8ea6..1faca5ee6b46b 100644 --- a/locale/pt-br/docs/meta/topics/dependencies.md +++ b/locale/pt-br/docs/meta/topics/dependencies.md @@ -24,6 +24,11 @@ O Node.js precisa de diversas dependências para funcionar do jeito que funciona ### V8 + + A biblioteca do V8 provê um engine Javascript para o Node.js, o qual é controlado pela API C++ do próprio V8. O V8 é atualmente mantido pelo Google, por conta de seu uso no navegador Chrome. @@ -32,6 +37,15 @@ pelo Google, por conta de seu uso no navegador Chrome. ### libuv + + Uma outra dependência importante é a *libuv*. Uma biblioteca escrita em C que é utilizada para abstrair todas as operações que não bloqueiam o I/O para uma interface consistente por todas as plataformas suportadas. Ela provê mecanismos @@ -43,6 +57,12 @@ para distribuir o trabalho que não pode ser feito assíncronamente a nível de ### http-parser + + O parsing do protocolo HTTP é delegado a uma biblioteca leve, escrita em C, chamada *http-parser*. Ela foi desenhada para não fazer nenhuma syscall ou alocações, portanto acaba possuindo um baixo consumo de memória por requisição. @@ -51,6 +71,14 @@ alocações, portanto acaba possuindo um baixo consumo de memória por requisiç ### c-ares + Para algumas requisições assíncronas de DNS, o Node.js utilizar uma biblioteca escrita em C chamada *c-ares*. 
Ela é exposta através do módulo de DNS no Javascript na família `resolve()` de funções. A função `lookup()`, que é o que o resto do core @@ -62,6 +90,12 @@ na libuv. A razão por trás disso é que o c-ares suporta caminhos como ### OpenSSL + + O OpenSSL é extensivamente usado nos módulos `tls` e `crypto` do Node.js. Ele provê uma implementação altamente testada de várias funções criptográficas que a web moderna utiliza em grande escala a fim de manter a segurança. @@ -70,6 +104,12 @@ moderna utiliza em grande escala a fim de manter a segurança. ### zlib + + Para compressões e descompressões rápidas, o Node.js utiliza na biblioteca zlib, que é o padrão da indústria, também conhecida pelo seu uso nas bibliotecas gzip e libpng. O Node.js utiliza o zlib para criar interfaces de descompressão e compressão que podem @@ -81,6 +121,13 @@ ser síncronas, assíncronas ou através de streaming. ### npm + + Uma das grandes vantagens do Node.js é sua modularidade, e com isso vem a necessidade de um gerenciador de pacotes de qualidade. O NPM foi justamente criado para isto. Com ele, temos a maior seleção de pacotes criados pela @@ -91,6 +138,13 @@ faz com que construir uma aplicação Node.js seja fácil e rápida. ### gyp + + Todo o sistema de build é gerenciado pelo gyp, um gerador de projetos baseado em python que foi copiado do V8. Ele pode gerar arquivos de projeto para uso em sistemas de build de diversas plataformas. O Node.js precisa disso porque @@ -101,6 +155,12 @@ linguagens que requerem compilação, como C++. ### gtest + + O código nativo pode ser testado usando gtest, que foi tirado do Chromium. Ele permite testes de C/C++ sem a necessidade de um executável node existente para dar o bootstrap inicial. 
From a6dcbe72c0ca2ad33c69ae90ff12d97b5c29294d Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Sun, 16 Jun 2019 14:16:02 -0300 Subject: [PATCH 04/17] Add sponsor --- locale/pt-br/docs/guides/abi-stability.md | 44 ++++++++++++++++++----- 1 file changed, 35 insertions(+), 9 deletions(-) diff --git a/locale/pt-br/docs/guides/abi-stability.md b/locale/pt-br/docs/guides/abi-stability.md index b010c70a7b0e2..4ba6fa1c14df2 100644 --- a/locale/pt-br/docs/guides/abi-stability.md +++ b/locale/pt-br/docs/guides/abi-stability.md @@ -1,25 +1,42 @@ --- -title: ABI Stability +title: Estabilidade ABI layout: docs.hbs --- -# ABI Stability +# Estabilidade ABI -## Introduction -An Application Binary Interface (ABI) is a way for programs to call functions +## Introdução + + + +Uma Interface Binária de Aplicação (IBA, ou *Application Binary Interface (ABI)* em inglês) +é uma forma que programas utilizam para chamar funções e utilizar estruturas de +dados de outros programas compilados. É a versão compilada de uma Interface +de Programação de Aplicações (API). Em outras palavras, os arquivos de +cabeçalho que descrevem as classes, funções, estruturas, enumeradores +e constante que permitem a aplicação performar uma tarefa desejada +correspondem, a nível de compilação, a um conjunto de endereços, valores de +parâmetros esperados, tamanhos de estruturas de memória e layouts com os quais +o provedor da ABI foi compilado. -The application using the ABI must be compiled such that the available + -Since the provider of the ABI and the user of the ABI may be compiled at +A aplicação que está usando o ABI deve ser compilada de tal maneira que +os endereços disponíveis, valores esperados de parâmetros, tamanhos de +estruturas de memória e layouts concordem com aqueles com os quais o +provedor da ABI foi compilado. Isto é normalmente feito compilando +a aplicação utilizando os headers providos pelo provedor da ABI. 
+ + + +Já que o provedor da ABI e o usuário da ABI podem ser compilados em tempos +diferentes e com versões diferentes do compilador, uma porção da responsabilidade +de garantir a compatibilidade da ABI está no compilador. Diferentes versões +do compilador, talvez providas por diferentes fornecedores, devem todas +produzir a mesma ABI a partir de um arquivo de cabeçalho com um conteúdo +determinado, e devem produzir código para a aplicação utilizando a ABI +que acessa a API descrita em um dado cabeçalho de acordo com as convenções +da ABI. Resultando The remaining responsibility for ensuring ABI compatibility lies with the team maintaining the header files which provide the API that results, upon From 7a4118b1288d68a99d373173cdc2d8361f945afd Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Sun, 16 Jun 2019 14:16:02 -0300 Subject: [PATCH 05/17] Begin ABI stability --- locale/pt-br/docs/guides/abi-stability.md | 44 ++++++++++++++++++----- 1 file changed, 35 insertions(+), 9 deletions(-) diff --git a/locale/pt-br/docs/guides/abi-stability.md b/locale/pt-br/docs/guides/abi-stability.md index b010c70a7b0e2..4ba6fa1c14df2 100644 --- a/locale/pt-br/docs/guides/abi-stability.md +++ b/locale/pt-br/docs/guides/abi-stability.md @@ -1,25 +1,42 @@ --- -title: ABI Stability +title: Estabilidade ABI layout: docs.hbs --- -# ABI Stability +# Estabilidade ABI -## Introduction -An Application Binary Interface (ABI) is a way for programs to call functions +## Introdução + + + +Uma Interface Binária de Aplicação (IBA, ou *Application Binary Interface (ABI)* em inglês) +é uma forma que programas utilizam para chamar funções e utilizar estruturas de +dados de outros programas compilados. É a versão compilada de uma Interface +de Programação de Aplicações (API). 
Em outras palavras, os arquivos de +cabeçalho que descrevem as classes, funções, estruturas, enumeradores +e constante que permitem a aplicação performar uma tarefa desejada +correspondem, a nível de compilação, a um conjunto de endereços, valores de +parâmetros esperados, tamanhos de estruturas de memória e layouts com os quais +o provedor da ABI foi compilado. -The application using the ABI must be compiled such that the available + -Since the provider of the ABI and the user of the ABI may be compiled at +A aplicação que está usando o ABI deve ser compilada de tal maneira que +os endereços disponíveis, valores esperados de parâmetros, tamanhos de +estruturas de memória e layouts concordem com aqueles com os quais o +provedor da ABI foi compilado. Isto é normalmente feito compilando +a aplicação utilizando os headers providos pelo provedor da ABI. + + + +Já que o provedor da ABI e o usuário da ABI podem ser compilados em tempos +diferentes e com versões diferentes do compilador, uma porção da responsabilidade +de garantir a compatibilidade da ABI está no compilador. Diferentes versões +do compilador, talvez providas por diferentes fornecedores, devem todas +produzir a mesma ABI a partir de um arquivo de cabeçalho com um conteúdo +determinado, e devem produzir código para a aplicação utilizando a ABI +que acessa a API descrita em um dado cabeçalho de acordo com as convenções +da ABI. 
Resultando The remaining responsibility for ensuring ABI compatibility lies with the team maintaining the header files which provide the API that results, upon From 7f31f2fd492175da5bf077b246ae939a2a4576d6 Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Sun, 16 Jun 2019 17:05:32 -0300 Subject: [PATCH 06/17] Finish ABI stability translation --- locale/pt-br/docs/guides/abi-stability.md | 136 +++++++++++++++++----- 1 file changed, 106 insertions(+), 30 deletions(-) diff --git a/locale/pt-br/docs/guides/abi-stability.md b/locale/pt-br/docs/guides/abi-stability.md index 4ba6fa1c14df2..3510ade6562d3 100644 --- a/locale/pt-br/docs/guides/abi-stability.md +++ b/locale/pt-br/docs/guides/abi-stability.md @@ -14,7 +14,6 @@ describing the classes, functions, data structures, enumerations, and constants which enable an application to perform a desired task correspond by way of compilation to a set of addresses and expected parameter values and memory structure sizes and layouts with which the provider of the ABI was compiled. --> - Uma Interface Binária de Aplicação (IBA, ou *Application Binary Interface (ABI)* em inglês) é uma forma que programas utilizam para chamar funções e utilizar estruturas de dados de outros programas compilados. É a versão compilada de uma Interface @@ -29,7 +28,6 @@ o provedor da ABI foi compilado. addresses, expected parameter values, and memory structure sizes and layouts agree with those with which the ABI provider was compiled. This is usually accomplished by compiling against the headers provided by the ABI provider. --> - A aplicação que está usando o ABI deve ser compilada de tal maneira que os endereços disponíveis, valores esperados de parâmetros, tamanhos de estruturas de memória e layouts concordem com aqueles com os quais o @@ -45,7 +43,6 @@ code for the application using the ABI that accesses the API described in a given header according to the conventions of the ABI resulting from the description in the header. 
Modern compilers have a fairly good track record of not breaking the ABI compatibility of the applications they compile. --> - Já que o provedor da ABI e o usuário da ABI podem ser compilados em tempos diferentes e com versões diferentes do compilador, uma porção da responsabilidade de garantir a compatibilidade da ABI está no compilador. Diferentes versões @@ -53,17 +50,26 @@ do compilador, talvez providas por diferentes fornecedores, devem todas produzir a mesma ABI a partir de um arquivo de cabeçalho com um conteúdo determinado, e devem produzir código para a aplicação utilizando a ABI que acessa a API descrita em um dado cabeçalho de acordo com as convenções -da ABI. Resultando +da ABI resultantes das descrições no cabeçalho. Compiladores modernos tem um histórico +relativamente bom em não quebrar esta compatibilidade nas aplicações que +compilam. -The remaining responsibility for ensuring ABI compatibility lies with the team + +O resto da responsabilidade por garantir a compatibilidade da API está no +time que mantém os arquivos de cabeçalho que criam a API que, após compilada, +resulta na ABI que deve permanecer estável. Mudanças nesses arquivos de +cabeçalho podem ser feitas, porém a natureza destas mudanças deve ser +acompanhada de perto para garantir que, após a compilação, a ABI não mude +de forma que usuários existentes percam a compatibilidade com a nova versão. -## ABI Stability in Node.js -Node.js provides header files maintained by several independent teams. For +## Estabilidade da ABI no Node.js + + +O Node.js possui diversos arquivos de cabeçalhos que são mantidos por diversos +times independentes, por exemplo, cabeçalhos como `node.h` e `node_buffer.h` são +mantidos pela equipe do Node.js. `v8.h` é mantido pela equipe do V8 que, mesmo +sendo muito próxima da equipe do Node.js, é ainda sim independente e possui suas +próprias agendas e prioridades. 
Portanto, o time do Node.js só possui controle +parcial sobre as mudanças que são introduzidas nos cabeçalhos que o projeto possui. +Como resultado, o projeto do Node.js adotou o [semantic versioning](https://semver.org/). +Isto garante que as APIs providas pelo projeto vão resultar em uma ABI estável +para todas as versões minor e patch do Node.js lançadas dentro de uma major. +Na prática, isso significa que o projeto como um todo se comprometeu a garantir +que um módulo nativo do Node.js que for compilado contra uma versão major do Node.js +vai carregar com sucesso em todas as versões minor ou patch dentro desta versão +major sobre a qual o addon foi compilado. ## N-API -Demand has arisen for equipping Node.js with an API that results in an ABI that + + +Uma demanda para equipar o Node.js com uma API que resulta em uma ABI que permanece +estável dentre multiplas versões major do Node.js acabou surgindo. A motivação para +criar tal API são as seguintes: + + +* A linguagem JavaScript permaneceu compatível com ela mesma desde o seus +primeiros dias, enquanto a ABI do engine que executa o código JavaScript muda +com cada versão major do Node.js. Isso significa que aplicações que consistem +de pacotes do Node.js que são completamente escritos em JavaScript não precisam +ser recompilados, reinstalados ou sofrer um novo deploy uma vez que uma nova +versão major do Node.js é instalada no ambiente de produção onde tal aplicação +está sendo executada. Em contraste a isso, se uma aplicação depende de m pacote +que contém um módulo nativo, então a aplicação precisa ser recompilada, reinstalada +e reexecutada sempre que uma nova versão major do Node.js é introduzida em seu ambiente +de produção. Essa disparidade entre os pacotes que contém addons nativos e os que são +escritos com JavaScript em sua totalidade acabou por adicionar um peso a mais na +manutenção em sistemas que estão em produção e dependem de addons nativos. 
+ + +* Outros projetos começaram a produzir interfaces JavaScript que são, essencialmente, +alternativas às implementações do Node.js. Uma vez que estes projetos são, geralmente, +criados e construídos em um engine JavaScript diferente do V8, seus addons nativos +necessariamente tem uma estrutura diferente e usam uma API diferente. Mesmo assim, +utilizar uma única API para um módulo nativo entre diferentes implementações da +API JavaScript do Node.js permitiria que estes projetos tirassem vantagem do +ecossistema de pacotes JavaScript que já se acumulou ao redor do Node.js. + + +* O Node.js pode mudar para utilizar um engine JavaScript diferente do V8 no futuro. +Isto significa que, externamente, todas as interfaces do Node.js continuariam iguais, +porém o cabeçalho do V8 não existiria. Tal alteração causaria uma disrupção do +ecossistema do Node.js no geral, e também do ecossistema de addons nativos em particular, +se a API que é agnóstica do engine JavaScript que está sendo utilizado não for provida +pelo Node.js e adotada pelos addons nativos. + + +Para estas finalidades o Node.js introduziu a N-API na versão 8.6.0 e a marcou como +um componente estável do projeto na versão 8.12.0. A API é definida pelos headers +[`node_api.h`][] e [`node_api_types.h`][], e provê uma garantia de compatibilidade +com versões posteriores que ultrapassam a limitação da versão major do Node.js. +Esta garantia pode ser descrita como o seguinte: + + +**Uma versão *n* da N-API estará disponível na versão major do Node.js na qual +ela foi primeiramente publicada, e em todas as versões subsequentes do Node.js, +incluindo versões major.** -A native addon author can take advantage of the N-API forward compatibility + +Um autor de um módulo nativo pode tirar proveito desta garantia de compatibilidade +da N-API para fazer com que seu módulo só utilize as APIs dispostas no `node_api.h` +e as estruturas de dados e constantes definidas em `node_api_types.h`. 
Fazendo isto, +o autor facilita a adoção do seu módulo indicando para seus usuários que este módulo +não acarretará em uma reinstalação ou um trabalho extra de manutenção se for instalado. +De forma que este módulo se comportaria da mesma forma que um pacote escrito completamente +em JavaScript. + + +A N-API é versionada de porque novas APIs são adicionadas de tempos em tempos. +Diferentemente do semantic versioning, as versões do N-API são cumulativas. Isto é, +cada versão da N-API tem o mesmo significado de uma versão minor no sistema semver, +significando que todas as mudanças feitas na N-API são retrocompatíveis. Além disso, +novas N-APIs são adicionadas sob uma flag experimental para dar à comunidade a chance +de serem desligadas em ambientes produtivos. O status experimental significa que, mesmo +que todo o cuidado tenha sido tomado para garantir que a nova API não foi modificada de +forma que se tornasse incompatível com a ABI no futuro, ela ainda não foi suficientemente +testada em produção para ser dada como correta e útil e, por conta disso, pode sofrer +mudanças incompatíveis com a ABI antes de ser finalmente incorporada dentro da próxima +versão da N-API. Isto é, uma versão experimental da N-API ainda não está coberta pela +garantia de compatilibilidade com versões posteriores que comentamos anteriormente. 
[`node_api.h`]: https://github.com/nodejs/node/blob/master/src/node_api.h [`node_api_types.h`]: https://github.com/nodejs/node/blob/master/src/node_api_types.h From 4dc263d556c238248d60ec66804f43a86573bd9e Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Fri, 21 Jun 2019 10:25:36 -0300 Subject: [PATCH 07/17] Fix Typo Co-Authored-By: Tiago Danin --- locale/pt-br/docs/guides/abi-stability.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/locale/pt-br/docs/guides/abi-stability.md b/locale/pt-br/docs/guides/abi-stability.md index 3510ade6562d3..bf06f75bb3622 100644 --- a/locale/pt-br/docs/guides/abi-stability.md +++ b/locale/pt-br/docs/guides/abi-stability.md @@ -102,7 +102,7 @@ major sobre a qual o addon foi compilado. remains stable across multiple Node.js major versions. The motivation for creating such an API is as follows: --> Uma demanda para equipar o Node.js com uma API que resulta em uma ABI que permanece -estável dentre multiplas versões major do Node.js acabou surgindo. A motivação para +estável dentre múltiplas versões major do Node.js acabou surgindo. A motivação para criar tal API são as seguintes: +# Visão geral sobre operações bloqueantes e não-bloqueantes -This overview covers the difference between **blocking** and **non-blocking** + +Esta visão geral cobre as **diferenças** entre chamadas **bloqueantes** e **não-bloqueantes** no Node.js. +Vamos nos referir ao event loop e à libuv, mas não é necessário nenhum conhecimento prévio sobre +estes tópicos. É esperado que o leitor tenha um conhecimento básico de [padrões de callback](https://nodejs.org/en/knowledge/getting-started/control-flow/what-are-callbacks/) no Javascript e Node.js. +> "I/O" se refere, principalmente, à interação com o disco do sistema +> e a rede suportada pela [libuv](http://libuv.org). 
-## Blocking
+
+## Chamadas bloqueantes

-**Blocking** is when the execution of additional JavaScript in the Node.js
+
+Ser **bloqueante** é quando a execução do restante do código JavaScript no processo
+do Node.js precisa esperar até que uma operação não-JavaScript seja completada. Isso acontece
+porque o event loop é incapaz de continuar executando JavaScript enquanto uma operação
+**bloqueante** está sendo executada.

-In Node.js, JavaScript that exhibits poor performance due to being CPU intensive
+
+No Node.js, um código JavaScript que mostra performance ruim por ser intensivo no uso
+de CPU, em vez de estar esperando por uma operação não-JavaScript,
+como I/O, não é geralmente identificado como **bloqueante**. Métodos
+síncronos na biblioteca padrão do Node.js que usam a libuv são as operações **bloqueantes**
+mais utilizadas. Módulos nativos também podem conter métodos **bloqueantes**.
+
+
+Todos os métodos de I/O na biblioteca padrão do Node.js têm uma versão assíncrona,
+que, por definição, são **não-bloqueantes**, e aceitam funções de callback. Alguns métodos
+também têm suas versões **bloqueantes**, que possuem o sufixo `Sync` no nome.

-## Comparing Code
+
+## Comparando códigos

-**Blocking** methods execute **synchronously** and **non-blocking** methods
+
+Métodos **bloqueantes** executam de forma **síncrona** e métodos **não-bloqueantes**
+executam de forma **assíncrona**.
```js
const fs = require('fs');

-const data = fs.readFileSync('/file.md'); // blocks here until file is read
+const data = fs.readFileSync('/file.md'); // a execução é bloqueada aqui até o arquivo ser lido
```

-And here is an equivalent **asynchronous** example:
+
+E aqui temos um exemplo equivalente usando um método **assíncrono**:

```js
const fs = require('fs');

From 10733f56a1bb713a388d5ff5858001e38d90e1d6 Mon Sep 17 00:00:00 2001
From: Lucas Santos
Date: Tue, 2 Jul 2019 23:29:23 -0300
Subject: [PATCH 10/17] Finish blocking vs non blocking

---
 .../docs/guides/blocking-vs-non-blocking.md | 88 +++++++++++++------
 1 file changed, 61 insertions(+), 27 deletions(-)

diff --git a/locale/pt-br/docs/guides/blocking-vs-non-blocking.md b/locale/pt-br/docs/guides/blocking-vs-non-blocking.md
index 43c168db176f7..30889bdf15229 100644
--- a/locale/pt-br/docs/guides/blocking-vs-non-blocking.md
+++ b/locale/pt-br/docs/guides/blocking-vs-non-blocking.md
@@ -76,23 +76,30 @@ fs.readFile('/file.md', (err, data) => {
 });
 ```

-The first example appears simpler than the second but has the disadvantage of
+
+O primeiro exemplo parece mais simples do que o segundo, mas ele tem a desvantagem
+de que, na segunda linha, temos um código **bloqueando** a execução de qualquer
+JavaScript adicional até que todo o arquivo seja lido. Note que, na versão síncrona,
+qualquer erro que houver na aplicação vai precisar ser tratado ou então o processo
+vai sofrer um crash. Na versão assíncrona, fica a critério do programador decidir
+se os erros devem ser tratados ou não.
```js
const fs = require('fs');

-const data = fs.readFileSync('/file.md'); // blocks here until file is read
+const data = fs.readFileSync('/file.md'); // trava aqui até o arquivo ser lido
 console.log(data);
-moreWork(); // will run after console.log
+maisProcessamento(); // roda depois de console.log
```

-And here is a similar, but not equivalent asynchronous example:
+
+Um exemplo similar, mas não equivalente, no formato assíncrono:

```js
const fs = require('fs');
@@ -100,39 +107,60 @@ fs.readFile('/file.md', (err, data) => {
   if (err) throw err;
   console.log(data);
 });
-moreWork(); // will run before console.log
+maisProcessamento(); // vai rodar antes do console.log
```

-In the first example above, `console.log` will be called before `moreWork()`. In
+
+No primeiro exemplo acima, `console.log` vai ser chamado antes de `maisProcessamento()`.
+No segundo exemplo, `fs.readFile()` é uma operação **não-bloqueante**, então a execução
+de código JavaScript vai continuar e o método `maisProcessamento()` vai ser chamado
+primeiro. A habilidade de executar `maisProcessamento()` sem ter de esperar o arquivo
+ser completamente lido é um conceito-chave de design que permite uma melhor escalabilidade
+através de mais rendimento.

-## Concurrency and Throughput
+## Concorrência e Rendimento

-JavaScript execution in Node.js is single threaded, so concurrency refers to the
+
+A execução do JavaScript no Node.js é single-threaded. Então, a concorrência se refere
+somente à capacidade do event loop de executar funções de callback
+depois de completar qualquer outro processamento. Qualquer código que se espera que
+rode de maneira concorrente deve permitir que o event loop continue executando
+enquanto uma operação não-JavaScript, como I/O, está sendo executada.
+
+
+Como um exemplo, vamos considerar o caso onde cada requisição de um servidor web
+leva 50ms para ser completada e 45ms desses 50ms são de I/O de banco de dados, que pode
+ser realizado de forma assíncrona.
Escolhendo uma abordagem **não-bloqueante**
+vamos liberar esses 45ms por requisição para que seja possível lidar com outras
+requisições. Isso é uma diferença bastante significativa em capacidade só porque
+decidimos utilizar um método **não-bloqueante** ao invés de sua variante
+**bloqueante**.
+
+
+O event loop é diferente dos modelos de muitas outras linguagens, onde threads
+adicionais podem ser criadas para lidar com processamento concorrente.

+## Perigos de misturar códigos bloqueantes e não-bloqueantes
+
+
+Existem alguns padrões que devem ser evitados quando lidamos com I/O. Vamos ver um
+exemplo:

```js
const fs = require('fs');
@@ -143,10 +171,14 @@ fs.readFile('/file.md', (err, data) => {
 fs.unlinkSync('/file.md');
```

-In the above example, `fs.unlinkSync()` is likely to be run before
+
+No exemplo acima, `fs.unlinkSync()` provavelmente vai rodar antes de `fs.readFile()`,
+o que deletaria o arquivo `file.md` antes que ele possa ser, de fato, lido. Uma forma
+melhor de escrever esse código, completamente **não-bloqueante** e com garantia de
+execução na ordem correta, seria:

```js
@@ -160,8 +192,10 @@ fs.readFile('/file.md', (readFileErr, data) => {
 });
```

-The above places a **non-blocking** call to `fs.unlink()` within the callback of
-`fs.readFile()` which guarantees the correct order of operations.
+
+O exemplo acima coloca uma chamada **não-bloqueante** a `fs.unlink()` dentro do callback
+de `fs.readFile()`, o que garante a ordem correta das operações.
## Additional Resources From c64738b4e989404bec3075f058d329dabfdf0796 Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Tue, 2 Jul 2019 23:30:51 -0300 Subject: [PATCH 11/17] Begin diagnostics --- .../docs/guides/diagnostics-flamegraph.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/locale/pt-br/docs/guides/diagnostics-flamegraph.md b/locale/pt-br/docs/guides/diagnostics-flamegraph.md index 159e7d028a070..5918e25c7098c 100644 --- a/locale/pt-br/docs/guides/diagnostics-flamegraph.md +++ b/locale/pt-br/docs/guides/diagnostics-flamegraph.md @@ -5,11 +5,11 @@ layout: docs.hbs # Flame Graphs -## What's a flame graph useful for? +## Para que serve um Flame Graph? Flame graphs are a way of visualizing CPU time spent in functions. They can help you pin down where you spend too much time doing synchronous operations. -## How to create a flame graph +## How to create a flame graph You might have heard creating a flame graph for Node.js is difficult, but that's not true (anymore). Solaris vms are no longer needed for flame graphs! @@ -35,9 +35,9 @@ Now let's get to work. 3. run node with perf enabled (see [perf output issues](#perf-output-issues) for tips specific to Node.js versions) ```bash perf record -e cycles:u -g -- node --perf-basic-prof app.js -``` +``` 4. disregard warnings unless they're saying you can't run perf due to missing packages; you may get some warnings about not being able to access kernel module samples which you're not after anyway. -5. Run `perf script > perfs.out` to generate the data file you'll visualize in a moment. It's useful to [apply some cleanup](#filtering-out-node-internal-functions) for a more readable graph +5. Run `perf script > perfs.out` to generate the data file you'll visualize in a moment. It's useful to [apply some cleanup](#filtering-out-node-internal-functions) for a more readable graph 6. install stackvis if not yet installed `npm i -g stackvis` 7. 
run `stackvis perf < perfs.out > flamegraph.htm` @@ -53,10 +53,10 @@ This is great for recording flame graph data from an already running process tha perf record -F99 -p `pgrep -n node` -g -- sleep 3 ``` -Wait, what is that `sleep 3` for? It's there to keep the perf running - despite `-p` option pointing to a different pid, the command needs to be executed on a process and end with it. +Wait, what is that `sleep 3` for? It's there to keep the perf running - despite `-p` option pointing to a different pid, the command needs to be executed on a process and end with it. perf runs for the life of the command you pass to it, whether or not you're actually profiling that command. `sleep 3` ensures that perf runs for 3 seconds. -Why is `-F` (profiling frequency) set to 99? It's a reasonable default. You can adjust if you want. +Why is `-F` (profiling frequency) set to 99? It's a reasonable default. You can adjust if you want. `-F99` tells perf to take 99 samples per second, for more precision increase the value. Lower values should produce less output with less precise results. Precision you need depends on how long your CPU intensive functions really run. If you're looking for the reason of a noticeable slowdown, 99 frames per second should be more than enough. After you get that 3 second perf record, proceed with generating the flame graph with the last two steps from above. @@ -88,13 +88,13 @@ Well, without these options you'll still get a flame graph, but with most bars l ### Node.js 8.x V8 pipeline changes -Node.js 8.x and above ships with new optimizations to JavaScript compilation pipeline in V8 engine which makes function names/references unreachable for perf sometimes. (It's called Turbofan) +Node.js 8.x and above ships with new optimizations to JavaScript compilation pipeline in V8 engine which makes function names/references unreachable for perf sometimes. (It's called Turbofan) -The result is you might not get your function names right in the flame graph. 
+The result is you might not get your function names right in the flame graph.

 You'll notice `ByteCodeHandler:` where you'd expect function names.

-[0x](https://www.npmjs.com/package/0x) has some mitigations for that built in.
+[0x](https://www.npmjs.com/package/0x) has some mitigations for that built in.

 For details see:
 - https://github.com/nodejs/benchmarking/issues/168

From 3a47c45f8bb92e1eaafacdcf43cc702899e9f34e Mon Sep 17 00:00:00 2001
From: Lucas Santos
Date: Sun, 7 Jul 2019 22:41:38 -0300
Subject: [PATCH 12/17] Start debugging

---
 .../docs/guides/debugging-getting-started.md | 96 +++++++++++++------
 1 file changed, 68 insertions(+), 28 deletions(-)

diff --git a/locale/pt-br/docs/guides/debugging-getting-started.md b/locale/pt-br/docs/guides/debugging-getting-started.md
index 0f29680104c2f..73cc7b94b3988 100644
--- a/locale/pt-br/docs/guides/debugging-getting-started.md
+++ b/locale/pt-br/docs/guides/debugging-getting-started.md
@@ -3,67 +3,107 @@ title: Debugging - Getting Started
 layout: docs.hbs
 ---

-# Debugging Guide
+# Guia de debugging

-This guide will help you get started debugging your Node.js apps and scripts.
+
+Este guia vai te ajudar a começar a debugar suas aplicações e scripts Node.js.

-## Enable Inspector
+## Ative o inspetor

-When started with the `--inspect` switch, a Node.js process listens for a
+

-Inspector clients must know and specify host address, port, and UUID to connect.
+Quando uma aplicação Node.js for iniciada com a flag `--inspect`, o processo irá
+esperar por um client de debugging. Por padrão, ele vai ouvir no host e porta
+locais `127.0.0.1:9229`. Cada processo também possuirá um [UUID][] único.
+
+
+Clients do inspector devem saber e especificar o endereço do host, porta e UUID
+para se conectarem.
Uma URL completa é mais ou menos assim:
+`ws://127.0.0.1:9229/0f2c936f-b1cd-4ac9-aab3-f63b0f33d55e`

-Node.js will also start listening for debugging messages if it receives a
+
+O Node.js também irá ouvir mensagens de debugging se o processo receber um sinal do tipo
+`SIGUSR1` (o `SIGUSR1` não está disponível no Windows). No Node.js 7 e anteriores, isto
+ativa a API legada de debugging. Nas versões 8 para frente, isto vai ativar a API de
+inspeção.

---

-## Security Implications
+## Implicações de segurança

-Since the debugger has full access to the Node.js execution environment, a
+
+Uma vez que o debugger tem total acesso ao ambiente de execução do Node.js, um ator
+malicioso que tiver acesso à conexão por esta porta pode executar um código qualquer
+em nome do processo que está sendo invadido. É importante notar e entender as implicações
+de se expor a porta de debugging em redes públicas ou privadas.

-### Exposing the debug port publicly is unsafe
+### Expor a porta de debugging publicamente é inseguro

-If the debugger is bound to a public IP address, or to 0.0.0.0, any clients that
+
+Se o debugger está conectado a um endereço de IP público, ou 0.0.0.0, qualquer client
+que puder chegar neste endereço vai poder se conectar a ele sem nenhuma restrição e
+vai ser capaz de rodar qualquer código.

-By default `node --inspect` binds to 127.0.0.1. You explicitly need to provide a
+
+Por padrão, `node --inspect` se liga a `127.0.0.1`. Você precisa fornecer explicitamente
+um endereço de IP público ou 0.0.0.0, etc., se deseja permitir conexões externas. Fazer isto
+pode expor sua aplicação a uma falha de segurança potencialmente significativa. Nós
+sugerimos que você garanta que todos os firewalls e controles de acesso existam e estejam
+configurados de acordo para prevenir tal exposição.
+
+
+Veja a seção sobre '[Ativando cenários de debugging remoto](#enabling-remote-debugging-scenarios)' para dicas de como permitir
+de forma segura que outros clients se conectem.

-### Local applications have full access to the inspector
+### Aplicações locais têm acesso total ao inspetor

-Even if you bind the inspector port to 127.0.0.1 (the default), any applications
+
+Mesmo que você conecte a porta do inspetor a 127.0.0.1 (o padrão), qualquer aplicação
+que rode localmente na sua máquina vai ter acesso sem restrições ao mesmo. Isto é
+intencional, para permitir que debuggers locais possam se conectar de forma
+mais simples.

-### Browsers, WebSockets and same-origin policy
+### Browsers, WebSockets e política de mesma origem

-Websites open in a web-browser can make WebSocket and HTTP requests under the
+
+Sites abertos em um navegador podem fazer requisições via WebSockets e HTTP
+desde que estejam dentro do modelo de segurança do browser. Uma conexão HTTP inicial
+é necessária para obter um ID único para uma sessão de debugging. Para mais segurança
+contra [ataques de rebinding de DNS](https://en.wikipedia.org/wiki/DNS_rebinding),
+o Node.js verifica se o header `Host` da conexão especifica um endereço de IP ou,
+exatamente, `localhost` ou `localhost6`.
+

-These security policies disallow connecting to a remote debug server by
+
+Estas políticas de segurança não permitem a conexão a um servidor de debug remoto
+somente especificando o hostname. Você pode contornar essa restrição especificando
+um IP ou usando um túnel SSH como descrito abaixo.

 ## Inspector Clients

@@ -84,7 +124,7 @@ info on these follows:
   are listed.
 * **Option 2**: Copy the `devtoolsFrontendUrl` from the output of `/json/list`
   (see above) or the --inspect hint text and paste into Chrome.
-* **Option 3**: Install the Chrome Extension NIM (Node Inspector Manager):
+* **Option 3**: Install the Chrome Extension NIM (Node Inspector Manager):
   https://chrome.google.com/webstore/detail/nim-node-inspector-manage/gnhhdgbaldcilmgcpfddgdbkhjohddkj

 #### [Visual Studio Code](https://github.com/microsoft/vscode) 1.10+

From d397b266ec3e4458c48c88e027a17d532a8dc4cf Mon Sep 17 00:00:00 2001
From: Lucas Santos
Date: Tue, 16 Jul 2019 20:55:06 -0300
Subject: [PATCH 13/17] Finish debugging

---
 .../docs/guides/debugging-getting-started.md | 157 +++++++++++------
 1 file changed, 99 insertions(+), 58 deletions(-)

diff --git a/locale/pt-br/docs/guides/debugging-getting-started.md b/locale/pt-br/docs/guides/debugging-getting-started.md
index 73cc7b94b3988..5ee312dc9f5da 100644
--- a/locale/pt-br/docs/guides/debugging-getting-started.md
+++ b/locale/pt-br/docs/guides/debugging-getting-started.md
@@ -97,7 +97,6 @@ contra [ataques de rebinding de DNS](https://en.wikipedia.org/wiki/DNS_rebinding
 o Node.js verifica se o header `Host` da conexão especifica um endereço de IP ou,
 exatamente, `localhost` ou `localhost6`.
-
@@ -105,76 +104,93 @@ Estas políticas de segurança não permitem a conexão a um servidor de debug r
 somente especificando o hostname. Você pode contornar essa restrição especificando
 um IP ou usando um túnel SSH como descrito abaixo.

-## Inspector Clients
+## Clients de Inspetores

-Several commercial and open source tools can connect to Node's Inspector. Basic
+
+Muitas ferramentas comerciais e open source podem se conectar ao inspetor do Node. Aqui
+estão as informações básicas sobre elas:

 #### [node-inspect](https://github.com/nodejs/node-inspect)

-* CLI Debugger supported by the Node.js Foundation which uses the [Inspector Protocol][].
+
+* Um debugger de linha de comando que é mantido pela Node.js Foundation, utiliza o [Protocolo de Inspeção][]
+* A última versão pode ser instalada de forma independente (usando `npm install -g node-inspect`) e utilizada com `node-inspect script.js`

 #### [Chrome DevTools](https://github.com/ChromeDevTools/devtools-frontend) 55+

-* **Option 1**: Open `chrome://inspect` in a Chromium-based
+
+* **Opção 1**: Abra uma nova aba em `chrome://inspect` em qualquer navegador baseado no Chromium. Clique no botão `configure` e tenha certeza de que seu host e porta estão listados
+* **Opção 2**: Copie o `devtoolsFrontendUrl` da saída do `/json/list` (veja acima) ou da flag --inspect e cole no Chrome
+* **Opção 3**: Instale a extensão NIM (Node Inspector Manager): https://chrome.google.com/webstore/detail/nim-node-inspector-manage/gnhhdgbaldcilmgcpfddgdbkhjohddkj

 #### [Visual Studio Code](https://github.com/microsoft/vscode) 1.10+

-* In the Debug panel, click the settings icon to open `.vscode/launch.json`.
-  Select "Node.js" for initial setup.
+
+* No painel "Debug", clique no ícone de configurações para abrir `.vscode/launch.json`.
+  Selecione "Node.js" para o setup inicial

 #### [Visual Studio](https://github.com/Microsoft/nodejstools) 2017

-* Choose "Debug > Start Debugging" from the menu or hit F5.
-* [Detailed instructions](https://github.com/Microsoft/nodejstools/wiki/Debugging).
+
+* Escolha "Debug > Start Debugging" no menu ou aperte F5
+* [Mais detalhes](https://github.com/Microsoft/nodejstools/wiki/Debugging).

-#### [JetBrains WebStorm](https://www.jetbrains.com/webstorm/) 2017.1+ and other JetBrains IDEs
+#### [JetBrains WebStorm](https://www.jetbrains.com/webstorm/) 2017.1+ e outros IDEs da JetBrains

-* Create a new Node.js debug configuration and hit Debug. `--inspect` will be used
+
+* Crie uma nova configuração de debug para Node.js e aperte o botão "Debug". A flag `--inspect` será usada
+ por padrão para o Node.js 7 ou superior.
Para desativar esse comportamento, desmarque `js.debugger.node.use.inspect` no registro da IDE.

 #### [chrome-remote-interface](https://github.com/cyrus-and/chrome-remote-interface)

-* Library to ease connections to Inspector Protocol endpoints.
+
+* Biblioteca para facilitar conexões aos endpoints do Protocolo de Inspeção

 #### [Gitpod](https://www.gitpod.io)

-* Start a Node.js debug configuration from the `Debug` view or hit `F5`. [Detailed instructions](https://medium.com/gitpod/debugging-node-js-applications-in-theia-76c94c76f0a1)
+
+* Crie uma nova configuração de debug para Node.js a partir da view `Debug` ou aperte `F5`. [Mais instruções aqui](https://medium.com/gitpod/debugging-node-js-applications-in-theia-76c94c76f0a1)

---

-## Command-line options
+## Opções de linha de comando

-The following table lists the impact of various runtime flags on debugging:
+
+A tabela a seguir lista o impacto de várias flags de execução no debugging:

-
+
-
+
@@ -182,9 +198,9 @@ The following table lists the impact of various runtime flags on debugging:
@@ -192,10 +208,10 @@ The following table lists the impact of various runtime flags on debugging:
@@ -203,8 +219,7 @@ The following table lists the impact of various runtime flags on debugging:
@@ -212,9 +227,8 @@ The following table lists the impact of various runtime flags on debugging:
@@ -222,63 +236,90 @@ The following table lists the impact of various runtime flags on debugging:

---

-## Enabling remote debugging scenarios
+## Ativando cenários de debugging remoto

-We recommend that you never have the debugger listen on a public IP address. If
+
+Nós recomendamos que você nunca faça com que o debugger ouça um IP público. Se você
+precisar permitir conexões de debug remotas, nós recomendamos que use um túnel SSH.
+Os exemplos a seguir são apenas ilustrativos. Por favor, antes de continuar, entenda
+o risco de segurança de permitir acesso remoto a um serviço privilegiado.
-Let's say you are running Node on remote machine, remote.example.com, that you
+
+Digamos que você esteja executando o Node em uma máquina remota, com o endereço `remoto.exemplo.com`, que
+você quer ser capaz de debugar. Nesta máquina, você deve iniciar o processo do node com o inspetor ouvindo
+somente o `localhost` (o padrão):

```bash
$ node --inspect server.js
```

-Now, on your local machine from where you want to initiate a debug client
-connection, you can setup an ssh tunnel:
+
+Agora, na sua máquina local, de onde você quer iniciar uma conexão de debug,
+crie um túnel SSH:

```bash
-$ ssh -L 9221:localhost:9229 user@remote.example.com
+$ ssh -L 9221:localhost:9229 user@remoto.exemplo.com
```

-This starts a ssh tunnel session where a connection to port 9221 on your local
+
+Isso inicia um túnel SSH onde a conexão para a porta 9221 na sua máquina local vai ser
+direcionada para a porta 9229 no servidor remoto.exemplo.com. Agora você pode anexar
+um debugger, como o Chrome DevTools ou o Visual Studio Code, ao `localhost:9221`,
+que deve ser capaz de debugar como se a aplicação estivesse sendo executada localmente.

---

-## Legacy Debugger
+## Debugger legado

-**The legacy debugger has been deprecated as of Node 7.7.0. Please use --inspect
-and Inspector instead.**
+
+**O debugger legado foi depreciado na versão 7.7.0 do Node. Por favor, utilize --inspect e
+o inspetor ao invés dele.**

-When started with the **--debug** or **--debug-brk** switches in version 7 and
+
+Quando iniciado com a flag **--debug** ou **--debug-brk** na versão 7 e anteriores,
+o Node.js começa a ouvir por comandos de debug definidos pelo protocolo de debug do V8,
+que já foi descontinuado, em uma porta TCP que, por padrão, é a `5858`. Qualquer client de debugging
+que conversa com esse protocolo pode se conectar a ele e debugar um processo sendo executado; abaixo temos
+alguns dos mais populares.
-#### [Built-in Debugger](https://nodejs.org/dist/latest-v6.x/docs/api/debugger.html)
+#### [Debugger nativo](https://nodejs.org/dist/latest-v6.x/docs/api/debugger.html)

-Start `node debug script_name.js` to start your script under Node's builtin
+
+Rode como `node debug script.js` para iniciar seu script através do debugger nativo
+de linha de comando. Seu script vai ser iniciado em um outro processo do node
+que vai ser rodado com a flag `--debug-brk`, e o processo inicial do Node vai executar
+o script `_debugger.js` e se conectar à sua aplicação.

 #### [node-inspector](https://github.com/node-inspector/node-inspector)

-Debug your Node.js app with Chrome DevTools by using an intermediary process
+
+Utiliza o Chrome DevTools para debugar sua aplicação Node.js através de um processo
+intermediário que traduz o protocolo de inspeção utilizado no Chromium para o
+protocolo de debug do V8 utilizado no Node.js.

-[Inspector Protocol]: https://chromedevtools.github.io/debugger-protocol-viewer/v8/
+[Protocolo de Inspeção]: https://chromedevtools.github.io/debugger-protocol-viewer/v8/
 [UUID]: https://tools.ietf.org/html/rfc4122

From ea430bc56a1d40be92995af89d82f193cef38d53 Mon Sep 17 00:00:00 2001
From: Lucas Santos
Date: Wed, 28 Aug 2019 22:37:24 -0300
Subject: [PATCH 14/17] Translate diag flamegraphs

---
 .../docs/guides/diagnostics-flamegraph.md | 134 ++++++++++++------
 1 file changed, 89 insertions(+), 45 deletions(-)

diff --git a/locale/pt-br/docs/guides/diagnostics-flamegraph.md b/locale/pt-br/docs/guides/diagnostics-flamegraph.md
index 5918e25c7098c..7a99b9e8d83d6 100644
--- a/locale/pt-br/docs/guides/diagnostics-flamegraph.md
+++ b/locale/pt-br/docs/guides/diagnostics-flamegraph.md
@@ -7,30 +7,39 @@ layout: docs.hbs

 ## Para que serve um Flame Graph?

-Flame graphs are a way of visualizing CPU time spent in functions. They can help you pin down where you spend too much time doing synchronous operations.
+
+Flame graphs são uma forma de visualizar o tempo de CPU gasto em funções. Eles podem ajudar você a identificar onde você pode estar gastando muito tempo fazendo operações síncronas.

-## How to create a flame graph
+## Como criar um Flame Graph

-You might have heard creating a flame graph for Node.js is difficult, but that's not true (anymore).
-Solaris vms are no longer needed for flame graphs!
+
+Você deve ter ouvido que criar Flame Graphs para o Node.js era complicado, mas isso não é verdade (não mais).
+VMs com Solaris não são mais necessárias para criação de Flame Graphs!

-Flame graphs are generated from `perf` output, which is not a node-specific tool. While it's the most powerful way to visualize CPU time spent, it may have issues with how JavaScript code is optimized in Node.js 8 and above. See [perf output issues](#perf-output-issues) section below.
+
+Flame Graphs são gerados a partir da saída do `perf`, que não é uma ferramenta específica do Node. Embora ela seja a forma mais poderosa de visualizar o tempo gasto em CPU, ela também pode ter problemas com como o código JavaScript é otimizado nas versões 8 e superiores do Node.js. Veja a seção [problemas de saída do perf](#problemas-de-saída-do-perf).
-### Use a pre-packaged tool
+### Usando uma ferramenta pré-empacotada

-If you want a single step that produces a flame graph locally, try [0x](https://www.npmjs.com/package/0x)
+
+Se você quiser uma ferramenta simples, de um único passo, que produz um Flame Graph rápido localmente, tente o [0x](https://www.npmjs.com/package/0x)

-For diagnosing production deployments, read these notes: [0x production servers](https://github.com/davidmarkclements/0x/blob/master/docs/production-servers.md)
+
+Para diagnosticar problemas em produção, leia estas notas: [0x em servidores de produção](https://github.com/davidmarkclements/0x/blob/master/docs/production-servers.md)

-### Create a flame graph with system perf tools
+### Criando um Flame Graph com as ferramentas `perf` do sistema

-The purpose of this guide is to show steps involved in creating a flame graph and keep you in control of each step.
+
+O propósito deste guia é mostrar os passos envolvidos em criar um flame graph e te manter no controle de cada parte.

-If you want to understand each step better take a look at the sections that follow were we go into more detail.
+
+Se você quiser entender melhor cada passo, dê uma olhada nas seções abaixo onde temos mais detalhes.

-Now let's get to work.
+
+Vamos começar!

-1. Install `perf` (usually available through the linux-tools-common package if not already installed)
+
+1. Instale o `perf` (geralmente disponível através do pacote `linux-tools-common`, se ainda não estiver instalado)
+2. Tente rodar o comando `perf` - ele pode reclamar sobre alguns módulos não encontrados do kernel, então instale-os também
+3. Execute o Node com o `perf` ativado (veja [problemas de saída do perf](#problemas-de-saída-do-perf) para dicas específicas de versões do Node)
+```bash
+perf record -e cycles:u -g -- node --perf-basic-prof app.js
+```
+4.
Ignore os avisos a não ser que eles digam que você não pode rodar o `perf` por conta de pacotes não encontrados; você pode receber alguns avisos sobre não poder acessar as samples dos módulos do kernel, mas elas não são necessárias de qualquer forma.
+5. Execute `perf script > perfs.out` para gerar o arquivo de dados que já vamos visualizar. É bom [fazer uma limpeza](#filtrando-funções-internas-do-node) para uma saída mais legível
+6. Instale o stackvis, se ainda não estiver instalado, através de `npm i -g stackvis`
+7. Execute `stackvis perf < perfs.out > flamegraph.htm`

-Now open the flame graph file in your favorite browser and watch it burn. It's color-coded so you can focus on the most saturated orange bars first. They're likely to represent CPU heavy functions.
+
+Agora abra o arquivo `htm` no seu browser preferido e veja o resultado. Ele tem um código de cores, de forma que você pode focar nas barras mais alaranjadas primeiro. Elas provavelmente representam funções que possuem alto uso de CPU.

-Worth mentioning - if you click an element of a flame graph a zoom-in of its surroundings will get displayed above the graph.
+
+Vale mencionar que, ao clicar em um elemento do gráfico, um zoom será aplicado nas redondezas do mesmo e exibido no topo do gráfico.

-### Using `perf` to sample a running process
+### Usando `perf` para visualizar um processo em andamento

-This is great for recording flame graph data from an already running process that you don't want to interrupt. Imagine a production process with a hard to reproduce issue.
+
+Isto é excelente para gravar dados para um flame graph a partir de um processo que já esteja rodando e você não quer interromper, por exemplo, um processo de produção com um problema difícil de reproduzir.

```bash
perf record -F99 -p `pgrep -n node` -g -- sleep 3
```

-Wait, what is that `sleep 3` for?
It's there to keep the perf running - despite `-p` option pointing to a different pid, the command needs to be executed on a process and end with it.
-perf runs for the life of the command you pass to it, whether or not you're actually profiling that command. `sleep 3` ensures that perf runs for 3 seconds.
+
+Mas o que é este `sleep 3`? Ele existe somente para manter o `perf` rodando - mesmo com o `-p` apontando para um PID diferente, o comando precisa ser executado em um processo e terminar com ele.
+O perf executa durante o tempo de vida do comando que você passar para ele, esteja você de fato fazendo profiling daquele comando ou não. `sleep 3` garante que o perf rode por 3 segundos.

-Why is `-F` (profiling frequency) set to 99? It's a reasonable default. You can adjust if you want.
-`-F99` tells perf to take 99 samples per second, for more precision increase the value. Lower values should produce less output with less precise results. Precision you need depends on how long your CPU intensive functions really run. If you're looking for the reason of a noticeable slowdown, 99 frames per second should be more than enough.
+
+Por que o `-F` (frequência de profiling) está em 99? É um default razoável, que pode ser ajustado se você quiser.
+O `-F99` diz ao perf para tirar 99 amostras por segundo; para mais precisão, aumente o valor. Valores menores devem produzir menos saída, com resultados também menos precisos. A precisão de que você precisa depende de quanto tempo suas funções intensivas em CPU realmente rodam. Se você está procurando a razão de uma lentidão perceptível, 99 frames por segundo devem ser mais do que suficientes.

-After you get that 3 second perf record, proceed with generating the flame graph with the last two steps from above.
+
+Depois de 3 segundos de gravação do perf, gere o flame graph com os últimos dois passos mostrados acima.
-### Filtering out Node.js internal functions +### Filtrando funções internas do Node -Usually you just want to look at the performance of your own calls, so filtering out Node.js and V8 internal functions can make the graph much easier to read. You can clean up your perf file with: + +Geralmente só queremos olhar a performance das nossas próprias chamadas, então filtrar as funções internas do Node.js e do V8 pode deixar o gráfico muito mais fácil de ler. Você pode limpar o arquivo com o seguinte comando: ```bash sed -i \ @@ -72,49 +100,65 @@ sed -i \ perfs.out ``` -If you read your flame graph and it seems odd, as if something is missing in the key function taking up most time, try generating your flame graph without the filters - maybe you got a rare case of an issue with Node.js itself. + +Se você ler o gráfico e ele parecer esquisito, como se houvesse algo faltando na função que está levando mais tempo, tente gerar o gráfico sem os filtros - talvez você tenha encontrado um caso raro de problema no próprio Node.js. -### Node.js's profiling options +### Opções de profiling do Node.js -`--perf-basic-prof-only-functions` and `--perf-basic-prof` are the two that are useful for debugging your JavaScript code. Other options are used for profiling Node.js itself, which is outside the scope of this guide. + +As flags `--perf-basic-prof-only-functions` e `--perf-basic-prof` são duas opções úteis para debugar seu código JavaScript. As demais opções são usadas para fazer profiling do Node.js em si, o que está fora do escopo deste guia. -`--perf-basic-prof-only-functions` produces less output, so it's the option with least overhead. + +`--perf-basic-prof-only-functions` produz menos saída, então é a opção com menos overhead. -### Why do I need them at all? +### Por que eu preciso disso? -Well, without these options you'll still get a flame graph, but with most bars labeled `v8::Function::Call`. 
+ +Bom, sem essas opções você ainda terá um flame graph, mas com a maioria das barras com a label `v8::Function::Call`. -## `perf` output issues +## Problemas de saída do `perf` -### Node.js 8.x V8 pipeline changes +### Mudanças na pipeline do V8 para o Node.js 8.x -Node.js 8.x and above ships with new optimizations to JavaScript compilation pipeline in V8 engine which makes function names/references unreachable for perf sometimes. (It's called Turbofan) + +As versões 8.x e acima do Node.js possuem novas otimizações para a compilação do JavaScript no V8, que às vezes tornam os nomes/referências das funções ilegíveis para o perf. (O novo compilador é chamado TurboFan) -The result is you might not get your function names right in the flame graph. + +Por conta disso, os nomes das funções podem não estar corretos no gráfico. -You'll notice `ByteCodeHandler:` where you'd expect function names. + +Você vai notar um `ByteCodeHandler:` onde deveria haver um nome de função. -[0x](https://www.npmjs.com/package/0x) has some mitigations for that built in. + +O [0x](https://www.npmjs.com/package/0x) tem algumas mitigações para isso já embutidas. -For details see: + +Para mais detalhes veja: - https://github.com/nodejs/benchmarking/issues/168 - https://github.com/nodejs/diagnostics/issues/148#issuecomment-369348961 ### Node.js 10+ -Node.js 10.x addresses the issue with Turbofan using the `--interpreted-frames-native-stack` flag. + +A versão 10.x do Node.js trata o problema com o TurboFan usando a flag `--interpreted-frames-native-stack`. -Run `node --interpreted-frames-native-stack --perf-basic-prof-only-functions` to get function names in the flame graph regardless of which pipeline V8 used to compile your JavaScript. + +Execute `node --interpreted-frames-native-stack --perf-basic-prof-only-functions` para obter os nomes das funções no gráfico independentemente de qual pipeline o V8 usou para compilar seu código JavaScript. 
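Juntando as duas coisas, um esboço hipotético de como gravar um perfil com nomes de função legíveis (o script `app.js` é inventado; assume-se Linux com `perf` e Node.js 10+ instalados, por isso o comando é protegido por uma checagem de disponibilidade):

```shell
# Gera um script de exemplo que consome CPU
echo 'for (let i = 0, s = 0; i < 1e7; i++) s += i;' > app.js

# Grava o perfil somente se perf e node estiverem disponíveis;
# caso contrário, apenas avisa (ambientes restritos podem bloquear o perf)
if command -v perf >/dev/null 2>&1 && command -v node >/dev/null 2>&1; then
  perf record -F99 -g -- \
    node --interpreted-frames-native-stack --perf-basic-prof-only-functions app.js \
    || echo "perf record falhou (verifique o perf_event_paranoid)"
else
  echo "perf ou node indisponível neste ambiente"
fi
```

Depois da gravação, os passos com `perf script` e `stackvis` mostrados no início do guia continuam valendo.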
-### Broken labels in the flame graph +### Labels quebradas no gráfico + + +Se você está vendo labels parecidas com essas: -If you're seeing labels looking like this ``` node`_ZN2v88internal11interpreter17BytecodeGenerator15VisitStatementsEPNS0_8ZoneListIPNS0_9StatementEEE ``` -it means the Linux perf you're using was not compiled with demangle support, see https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1396654 for example + + +Isso significa que a versão do perf que você está usando no Linux não foi compilada com suporte a demangle; veja https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1396654 como exemplo. -## Examples +## Exemplos -Practice capturing flame graphs yourself with [a flame graph exercise](https://github.com/naugtur/node-example-flamegraph)! + +Pratique capturando amostras e gerando flame graphs você mesmo com [este exercício](https://github.com/naugtur/node-example-flamegraph)! From 58a233951259c77c011a6c06afd27dac3bf97aa7 Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Thu, 5 Sep 2019 12:31:45 -0300 Subject: [PATCH 15/17] Fix linter errors Thanks to @MaledongGit --- locale/pt-br/docs/guides/abi-stability.md | 0 locale/pt-br/docs/guides/blocking-vs-non-blocking.md | 0 locale/pt-br/docs/guides/debugging-getting-started.md | 10 +++++----- locale/pt-br/docs/guides/diagnostics-flamegraph.md | 6 +++--- locale/pt-br/docs/meta/topics/dependencies.md | 0 5 files changed, 8 insertions(+), 8 deletions(-) mode change 100644 => 100755 locale/pt-br/docs/guides/abi-stability.md mode change 100644 => 100755 locale/pt-br/docs/guides/blocking-vs-non-blocking.md mode change 100644 => 100755 locale/pt-br/docs/guides/debugging-getting-started.md mode change 100644 => 100755 locale/pt-br/docs/guides/diagnostics-flamegraph.md mode change 100644 => 100755 locale/pt-br/docs/meta/topics/dependencies.md diff --git a/locale/pt-br/docs/guides/abi-stability.md b/locale/pt-br/docs/guides/abi-stability.md old mode 100644 new mode 100755 diff --git 
a/locale/pt-br/docs/guides/blocking-vs-non-blocking.md b/locale/pt-br/docs/guides/blocking-vs-non-blocking.md old mode 100644 new mode 100755 diff --git a/locale/pt-br/docs/guides/debugging-getting-started.md b/locale/pt-br/docs/guides/debugging-getting-started.md old mode 100644 new mode 100755 index 5ee312dc9f5da..fbac597007285 --- a/locale/pt-br/docs/guides/debugging-getting-started.md +++ b/locale/pt-br/docs/guides/debugging-getting-started.md @@ -112,7 +112,7 @@ info on these follows: Muitas ferramentas comerciais e open source podem se conectar ao inspetor do Node. Aqui estão as informações básicas sobre eles: -#### [node-inspect](https://github.com/nodejs/node-inspect) +### [node-inspect](https://github.com/nodejs/node-inspect) Isso significa que a versão do perf que você está usando no Linux não foi compilada com suporte a demangle; veja https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1396654 como exemplo. - ## Exemplos From 4d92fe5512636fb089387a80a6ce2f58d17bd039 Mon Sep 17 00:00:00 2001 From: Lucas Santos Date: Thu, 5 Sep 2019 12:41:48 -0300 Subject: [PATCH 17/17] Fix linter errors on untranslated files --- .../guides/anatomy-of-an-http-transaction.md | 2 -- .../docs/guides/backpressuring-in-streams.md | 3 +-- .../guides/buffer-constructor-deprecation.md | 16 ++++++------ locale/pt-br/docs/guides/domain-postmortem.md | 6 ----- .../docs/guides/dont-block-the-event-loop.md | 26 +++++++++---------- 5 files changed, 22 insertions(+), 31 deletions(-) diff --git a/locale/pt-br/docs/guides/anatomy-of-an-http-transaction.md index 289514b9c5537..e7cdb03e47bf4 100644 --- a/locale/pt-br/docs/guides/anatomy-of-an-http-transaction.md +++ b/locale/pt-br/docs/guides/anatomy-of-an-http-transaction.md @@ -401,8 +401,6 @@ From these basics, Node.js HTTP servers for many typical use cases can be constructed. 
There are plenty of other things these APIs provide, so be sure to read through the API docs for [`EventEmitters`][], [`Streams`][], and [`HTTP`][]. - - [`EventEmitters`]: https://nodejs.org/api/events.html [`Streams`]: https://nodejs.org/api/stream.html [`createServer`]: https://nodejs.org/api/http.html#http_http_createserver_requestlistener diff --git a/locale/pt-br/docs/guides/backpressuring-in-streams.md b/locale/pt-br/docs/guides/backpressuring-in-streams.md index e77c834c8e47b..2097e2c56bcee 100644 --- a/locale/pt-br/docs/guides/backpressuring-in-streams.md +++ b/locale/pt-br/docs/guides/backpressuring-in-streams.md @@ -60,7 +60,7 @@ In one scenario, we will take a large file (approximately ~9gb) and compress it using the familiar [`zip(1)`][] tool. ``` -$ zip The.Matrix.1080p.mkv +zip The.Matrix.1080p.mkv ``` While that will take a few minutes to complete, in another shell we may run @@ -592,7 +592,6 @@ Be sure to read up more on [`Stream`][] for other API functions to help improve and unleash your streaming capabilities when building an application with Node.js. - [`Stream`]: https://nodejs.org/api/stream.html [`Buffer`]: https://nodejs.org/api/buffer.html [`EventEmitters`]: https://nodejs.org/api/events.html diff --git a/locale/pt-br/docs/guides/buffer-constructor-deprecation.md b/locale/pt-br/docs/guides/buffer-constructor-deprecation.md index 5d07bb4ea7595..1516cd15d46df 100644 --- a/locale/pt-br/docs/guides/buffer-constructor-deprecation.md +++ b/locale/pt-br/docs/guides/buffer-constructor-deprecation.md @@ -9,7 +9,7 @@ layout: docs.hbs This guide explains how to migrate to safe `Buffer` constructor methods. The migration fixes the following deprecation warning: -
+
The Buffer() and new Buffer() constructors are not recommended for use due to security and usability concerns. Please use the new Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() construction methods instead.
@@ -204,13 +204,13 @@ const buf = Buffer.alloc ? Buffer.alloc(number) : new Buffer(number).fill(0); ## Regarding `Buffer.allocUnsafe()` Be extra cautious when using `Buffer.allocUnsafe()`: - * Don't use it if you don't have a good reason to - * e.g. you probably won't ever see a performance difference for small buffers, in fact, those - might be even faster with `Buffer.alloc()`, - * if your code is not in the hot code path — you also probably won't notice a difference, - * keep in mind that zero-filling minimizes the potential risks. - * If you use it, make sure that you never return the buffer in a partially-filled state, - * if you are writing to it sequentially — always truncate it to the actual written length +* Don't use it if you don't have a good reason to + * e.g. you probably won't ever see a performance difference for small buffers, in fact, those + might be even faster with `Buffer.alloc()`, + * if your code is not in the hot code path — you also probably won't notice a difference, + * keep in mind that zero-filling minimizes the potential risks. +* If you use it, make sure that you never return the buffer in a partially-filled state, + * if you are writing to it sequentially — always truncate it to the actual written length Errors in handling buffers allocated with `Buffer.allocUnsafe()` could result in various issues, ranged from undefined behavior of your code to sensitive data (user input, passwords, certs) diff --git a/locale/pt-br/docs/guides/domain-postmortem.md b/locale/pt-br/docs/guides/domain-postmortem.md index 6426c2a73361a..fe8c213a938ba 100644 --- a/locale/pt-br/docs/guides/domain-postmortem.md +++ b/locale/pt-br/docs/guides/domain-postmortem.md @@ -99,7 +99,6 @@ automatically bubble. Unfortunately both these situations occur, leading to potentially confusing behavior that may even be prone to difficult to debug timing conflicts. 
- ### API Gaps While APIs based on using `EventEmitter` can use `bind()` and errback style @@ -109,7 +108,6 @@ wanted to support domains using a mechanism alternative to those mentioned they must manually implement domain support themselves. Instead of being able to leverage the implicit mechanisms already in place. - ### Error Propagation Propagating errors across nested domains is not straight forward, if even @@ -155,7 +153,6 @@ several asynchronous requests and each one then `write()`'s data back to the client many more errors will arise from attempting to `write()` to a closed handle. More on this in _Resource Cleanup on Exception_. - ### Resource Cleanup on Exception The following script contains a more complex example of properly cleaning up @@ -333,7 +330,6 @@ In the end, in terms of handling errors, domains aren't much more than a glorified `'uncaughtException'` handler. Except with more implicit and unobservable behavior by third-parties. - ### Resource Propagation Another use case for domains was to use it to propagate data along asynchronous @@ -408,7 +404,6 @@ user's callback is called. Also the instantiation of `DataStream` in the In short, for this to have a prayer of a chance usage would need to strictly adhere to a set of guidelines that would be difficult to enforce or test. - ## Performance Issues A significant deterrent from using domains is the overhead. Using node's @@ -433,7 +428,6 @@ will result in a 17% performance loss. Granted, this is for the optimized scenario of the benchmark, but I believe this demonstrates the necessity for a mechanism such as domain to be as cheap to run as possible. 
- ## Looking Ahead The domain module has been soft deprecated since Dec 2014, but has not yet been diff --git a/locale/pt-br/docs/guides/dont-block-the-event-loop.md b/locale/pt-br/docs/guides/dont-block-the-event-loop.md index e720f73b5d01b..da1408515080f 100644 --- a/locale/pt-br/docs/guides/dont-block-the-event-loop.md +++ b/locale/pt-br/docs/guides/dont-block-the-event-loop.md @@ -111,7 +111,7 @@ Example 1: A constant-time callback. app.get('/constant-time', (req, res) => { res.sendStatus(200); }); -``` +``` Example 2: An `O(n)` callback. This callback will run quickly for small `n` and more slowly for large `n`. @@ -126,7 +126,7 @@ app.get('/countToN', (req, res) => { res.sendStatus(200); }); -``` +``` Example 3: An `O(n^2)` callback. This callback will still run quickly for small `n`, but for large `n` it will run much more slowly than the previous `O(n)` example. @@ -227,19 +227,19 @@ These APIs are expensive, because they involve significant computation (encrypti In a server, *you should not use the following synchronous APIs from these modules*: - Encryption: - - `crypto.randomBytes` (synchronous version) - - `crypto.randomFillSync` - - `crypto.pbkdf2Sync` - - You should also be careful about providing large input to the encryption and decryption routines. + - `crypto.randomBytes` (synchronous version) + - `crypto.randomFillSync` + - `crypto.pbkdf2Sync` + - You should also be careful about providing large input to the encryption and decryption routines. - Compression: - - `zlib.inflateSync` - - `zlib.deflateSync` + - `zlib.inflateSync` + - `zlib.deflateSync` - File system: - - Do not use the synchronous file system APIs. For example, if the file you access is in a [distributed file system](https://en.wikipedia.org/wiki/Clustered_file_system#Distributed_file_systems) like [NFS](https://en.wikipedia.org/wiki/Network_File_System), access times can vary widely. + - Do not use the synchronous file system APIs. 
For example, if the file you access is in a [distributed file system](https://en.wikipedia.org/wiki/Clustered_file_system#Distributed_file_systems) like [NFS](https://en.wikipedia.org/wiki/Network_File_System), access times can vary widely. - Child process: - - `child_process.spawnSync` - - `child_process.execSync` - - `child_process.execFileSync` + - `child_process.spawnSync` + - `child_process.execSync` + - `child_process.execFileSync` This list is reasonably complete as of Node v9. @@ -449,7 +449,7 @@ Whether you use only the Node Worker Pool or maintain separate Worker Pool(s), y To do this, minimize the variation in Task times by using Task partitioning. -## The risks of npm modules +## The risks of npm modules While the Node core modules offer building blocks for a wide variety of applications, sometimes something more is needed. Node developers benefit tremendously from the [npm ecosystem](https://www.npmjs.com/), with hundreds of thousands of modules offering functionality to accelerate your development process. Remember, however, that the majority of these modules are written by third-party developers and are generally released with only best-effort guarantees. A developer using an npm module should be concerned about two things, though the latter is frequently forgotten.
FlagMeaning
FlagSignificado
--inspect
    -
  • Enable inspector agent
  • -
  • Listen on default address and port (127.0.0.1:9229)
  • +
  • Ativa o agente do inspetor
  • +
  • Ouve no endereço e porta padrões (127.0.0.1:9229)
--inspect=[host:port]--inspect=[host:porta]
    -
  • Enable inspector agent
  • -
  • Bind to address or hostname host (default: 127.0.0.1)
  • -
  • Listen on port port (default: 9229)
  • +
  • Ativa o agente do inspetor
  • +
  • Faz a conexão com o endereço ou hostname descrito em host (padrão: 127.0.0.1)
  • +
  • Ouve na porta descrita por porta (padrão: 9229)
--inspect-brk
    -
  • Enable inspector agent
  • -
  • Listen on default address and port (127.0.0.1:9229)
  • -
  • Break before user code starts
  • +
  • Ativa o agente do inspetor
  • +
  • Ouve no endereço e porta padrões (127.0.0.1:9229)
  • +
  • Pausa antes do código do usuário iniciar
--inspect-brk=[host:port]
    -
  • Enable inspector agent
  • -
  • Bind to address or hostname host (default: 127.0.0.1)
  • -
  • Listen on port port (default: 9229)
  • -
  • Break before user code starts
  • +
  • Ativa o agente do inspetor
  • +
  • Faz a conexão com o endereço ou hostname descrito em host (padrão: 127.0.0.1)
  • +
  • Ouve na porta descrita por porta (padrão: 9229)
  • +
  • Pausa antes do código do usuário iniciar
node inspect script.js
    -
  • Spawn child process to run user's script under --inspect flag; - and use main process to run CLI debugger.
  • +
  • Inicia um child process para executar o script do usuário sob a flag --inspect e usa o processo principal para executar o CLI do debugger.
node inspect --port=xxxx script.js
    -
  • Spawn child process to run user's script under --inspect flag; - and use main process to run CLI debugger.
  • -
  • Listen on port port (default: 9229)
  • +
  • Inicia um child process para executar o script do usuário sob a flag --inspect e usa o processo principal para executar o CLI do debugger.
  • +
  • Ouve na porta descrita por porta (padrão: 9229)