diff --git a/locale/fa/404.md b/locale/fa/404.md
new file mode 100644
index 0000000000000..14c482bdfffba
--- /dev/null
+++ b/locale/fa/404.md
@@ -0,0 +1,7 @@
+---
+layout: page.hbs
+permalink: false
+title: 404
+---
+## 404: Page could not be found
+### ENOENT: no such file or directory
diff --git a/locale/fa/about/community.md b/locale/fa/about/community.md
new file mode 100644
index 0000000000000..895fa82ab6c23
--- /dev/null
+++ b/locale/fa/about/community.md
@@ -0,0 +1,56 @@
+---
+title: Community Committee
+layout: about.hbs
+---
+
+# Community Committee
+
+The Community Committee (CommComm) is a top-level committee in the Node.js Foundation. The CommComm has authority over outward-facing community outreach efforts, including:
+ - Community [Evangelism](https://github.com/nodejs/evangelism)
+ - Education Initiatives
+ - Cultural Direction of Node.js Foundation
+ - Community Organization Outreach
+ - Translation and Internationalization
+ - Project Moderation/Mediation
+ - Public Outreach and [Publications](https://medium.com/the-node-js-collection)
+
+There are four types of involvement with the Community Committee:
+
+ - A **Contributor** is any individual creating or commenting on an issue or pull request.
+ - A **Collaborator** is a contributor who has been given write access to the repository.
+ - An **Observer** is any individual who has requested or been requested to attend a CommComm meeting. It is also the first step to becoming a Member.
+ - A **Member** is a collaborator with voting rights who has met the participation requirements and been voted in through the CommComm voting process.
+
+For the current list of Community Committee members, see the project's [README.md](https://github.com/nodejs/community-committee).
+
+## Contributors and Collaborators
+
+It is the mission of CommComm to further build out the Node.js Community. If you're reading this, you're already a part of that community – and as a part of the Node.js Community, we'd love to have your help!
+
+The [nodejs/community-committee](https://github.com/nodejs/community-committee) GitHub repository is a great place to start. Check out the [issues labeled "Good first issue"](https://github.com/nodejs/community-committee/labels/good%20first%20issue) to see where we're looking for help. If you have your own ideas on how we can engage and build the community, feel free to open your own issues, create pull requests with improvements to our existing work, or help us by sharing your thoughts and ideas in the ongoing discussions we're having in GitHub.
+
+You can further participate in our ongoing efforts around community building - like localization, evangelism, the Node.js Collection, and others - by digging into their respective repositories and getting involved!
+
+Before diving in, please be sure to read the [Collaborator Guide](https://github.com/nodejs/community-committee/blob/master/COLLABORATOR_GUIDE.md).
+
+If you're interested in participating in the Community Committee as a committee member, you should read the section below on **Observers and Membership**, and create an issue asking to be an Observer in our next Community Committee meeting. You can find a great example of such an issue [here](https://github.com/nodejs/community-committee/issues/142).
+
+## Observers and Membership
+
+If you're interested in becoming more deeply involved with the Community Committee and its projects, we encourage you to become an active observer and work toward achieving member status. To become a member you must:
+
+ 1. Attend the bi-weekly meetings, investigate issues tagged as good first issue, file issues and pull requests, and provide insight via GitHub as a contributor or collaborator.
+ 2. Request to become an Observer by filing an issue. Once added as an Observer to meetings, we will track attendance and participation for 3 months, in accordance with our governance guidelines. You can find a great example of such an issue [here](https://github.com/nodejs/community-committee/issues/142).
+ 3. When you meet the 3-month minimum attendance and participation expectations, the CommComm will vote to add you as a member.
+
+Membership is for 6 months. The group will ask on a regular basis if the expiring members would like to stay on. A member just needs to reply to renew. There is no fixed size of the CommComm. However, the expected target is between 9 and 12. You can read more about membership, and other administrative details, in our [Governance Guide](https://github.com/nodejs/community-committee/blob/master/GOVERNANCE.md).
+
+Regular CommComm meetings are held bi-monthly in a Zoom video conference, and broadcast live to the public on YouTube. Any community member or contributor can ask that something be added to the next meeting's agenda by logging a GitHub Issue.
+
+Meeting announcements and agendas are posted before the meeting begins in the organization's [GitHub issues](https://github.com/nodejs/community-committee/issues). You can also find the regularly scheduled meetings on the [Node.js Calendar](https://nodejs.org/calendar). To follow Node.js meeting livestreams on YouTube, subscribe to the Node.js Foundation [YouTube channel](https://www.youtube.com/channel/UCQPYJluYC_sn_Qz_XE-YbTQ). Be sure to click the bell to be notified of new videos!
+
+## Consensus Seeking Process
+
+The CommComm follows a [Consensus Seeking](https://en.wikipedia.org/wiki/Consensus-seeking_decision-making) decision-making model.
+
+When an agenda item has appeared to reach a consensus, the moderator will ask "Does anyone object?" as a final call for dissent from the consensus. If a consensus with no objections cannot be reached, then a majority-wins vote is called. It is expected that the majority of decisions made by the CommComm are via the consensus-seeking process and that voting is only used as a last resort.
\ No newline at end of file diff --git a/locale/fa/about/governance.md b/locale/fa/about/governance.md new file mode 100644 index 0000000000000..720a6fe30d06c --- /dev/null +++ b/locale/fa/about/governance.md @@ -0,0 +1,139 @@ +--- +title: Project Governance +layout: about.hbs +--- +# Project Governance + +## Technical Steering Committee + +The project is jointly governed by a Technical Steering Committee (TSC) +which is responsible for high-level guidance of the project. + +The TSC has final authority over this project including: + +* Technical direction +* Project governance and process (including this policy) +* Contribution policy +* GitHub repository hosting +* Conduct guidelines +* Maintaining the list of additional Collaborators + +Initial membership invitations to the TSC were given to individuals who +had been active contributors, and who have significant +experience with the management of the project. Membership is +expected to evolve over time according to the needs of the project. + +For the current list of TSC members, see the project +[README.md](https://github.com/nodejs/node/blob/master/README.md#tsc-technical-steering-committee). + +## Collaborators + +The [nodejs/node](https://github.com/nodejs/node) GitHub repository is +maintained by the TSC and additional Collaborators who are added by the +TSC on an ongoing basis. + +Individuals making significant and valuable contributions are made +Collaborators and given commit-access to the project. These +individuals are identified by the TSC and their addition as +Collaborators is discussed during the weekly TSC meeting. + +_Note:_ If you make a significant contribution and are not considered +for commit-access, log an issue or contact a TSC member directly and it +will be brought up in the next TSC meeting. + +Modifications of the contents of the nodejs/node repository are made on +a collaborative basis. Anybody with a GitHub account may propose a +modification via pull request and it will be considered by the project +Collaborators. All pull requests must be reviewed and accepted by a +Collaborator with sufficient expertise who is able to take full +responsibility for the change. In the case of pull requests proposed +by an existing Collaborator, an additional Collaborator is required +for sign-off. Consensus should be sought if additional Collaborators +participate and there is disagreement around a particular +modification. See _Consensus Seeking Process_ below for further detail +on the consensus model used for governance. + +Collaborators may opt to elevate significant or controversial +modifications, or modifications that have not found consensus to the +TSC for discussion by assigning the ***tsc-agenda*** tag to a pull +request or issue. The TSC should serve as the final arbiter where +required. + +For the current list of Collaborators, see the project +[README.md](https://github.com/nodejs/node/blob/master/README.md#current-project-team-members). + +A guide for Collaborators is maintained in +[COLLABORATOR_GUIDE.md](https://github.com/nodejs/node/blob/master/COLLABORATOR_GUIDE.md). + +## TSC Membership + +TSC seats are not time-limited. There is no fixed size of the TSC. +However, the expected target is between 6 and 12, to ensure adequate +coverage of important areas of expertise, balanced with the ability to +make decisions efficiently. + +There is no specific set of requirements or qualifications for TSC +membership beyond these rules. + +The TSC may add additional members to the TSC by a standard TSC motion. 
+ +A TSC member may be removed from the TSC by voluntary resignation, or by +a standard TSC motion. + +Changes to TSC membership should be posted in the agenda, and may be +suggested as any other agenda item (see "TSC Meetings" below). + +No more than 1/3 of the TSC members may be affiliated with the same +employer. If removal or resignation of a TSC member, or a change of +employment by a TSC member, creates a situation where more than 1/3 of +the TSC membership shares an employer, then the situation must be +immediately remedied by the resignation or removal of one or more TSC +members affiliated with the over-represented employer(s). + +## TSC Meetings + +The TSC meets weekly on a Google Hangout On Air. The meeting is run by +a designated moderator approved by the TSC. Each meeting should be +published to YouTube. + +Items are added to the TSC agenda which are considered contentious or +are modifications of governance, contribution policy, TSC membership, +or release process. + +The intention of the agenda is not to approve or review all patches. +That should happen continuously on GitHub and be handled by the larger +group of Collaborators. + +Any community member or contributor can ask that something be added to +the next meeting's agenda by logging a GitHub Issue. Any Collaborator, +TSC member or the moderator can add the item to the agenda by adding +the ***tsc-agenda*** tag to the issue. + +Prior to each TSC meeting, the moderator will share the Agenda with +members of the TSC. TSC members can add any items they like to the +agenda at the beginning of each meeting. The moderator and the TSC +cannot veto or remove items. + +The TSC may invite persons or representatives from certain projects to +participate in a non-voting capacity. These invitees currently are: + +* A representative from [build](https://github.com/node-forward/build) + chosen by that project. + +The moderator is responsible for summarizing the discussion of each +agenda item and sending it as a pull request after the meeting. + +## Consensus Seeking Process + +The TSC follows a +[Consensus Seeking](http://en.wikipedia.org/wiki/Consensus-seeking_decision-making) +decision making model. + +When an agenda item has appeared to reach a consensus, the moderator +will ask "Does anyone object?" as a final call for dissent from the +consensus. + +If an agenda item cannot reach a consensus, a TSC member can call for +either a closing vote or a vote to table the issue to the next +meeting. The call for a vote must be approved by a majority of the TSC +or else the discussion will continue. Simple majority wins. diff --git a/locale/fa/about/index.md b/locale/fa/about/index.md new file mode 100644 index 0000000000000..b22b176df4a7c --- /dev/null +++ b/locale/fa/about/index.md @@ -0,0 +1,69 @@ +--- +layout: about.hbs +title: About +trademark: Trademark +--- +# About Node.js® + +As an asynchronous event driven JavaScript runtime, Node is designed to build +scalable network applications. In the following "hello world" example, many +connections can be handled concurrently. Upon each connection the callback is +fired, but if there is no work to be done, Node will sleep. 
+ +```javascript +const http = require('http'); + +const hostname = '127.0.0.1'; +const port = 3000; + +const server = http.createServer((req, res) => { + res.statusCode = 200; + res.setHeader('Content-Type', 'text/plain'); + res.end('Hello World\n'); +}); + +server.listen(port, hostname, () => { + console.log(`Server running at http://${hostname}:${port}/`); +}); +``` + +This is in contrast to today's more common concurrency model where OS threads +are employed. Thread-based networking is relatively inefficient and very +difficult to use. Furthermore, users of Node are free from worries of +dead-locking the process, since there are no locks. Almost no function in Node +directly performs I/O, so the process never blocks. Because nothing blocks, +scalable systems are very reasonable to develop in Node. + +If some of this language is unfamiliar, there is a full article on +[Blocking vs Non-Blocking][]. + +--- + +Node is similar in design to, and influenced by, systems like Ruby's +[Event Machine][] or Python's [Twisted][]. Node takes the event model a bit +further. It presents an [event loop][] as a runtime construct instead of as a library. In other systems there is always a blocking call to start the +event-loop. +Typically behavior is defined through callbacks at the beginning of a script +and at the end starts a server through a blocking call like +`EventMachine::run()`. In Node there is no such start-the-event-loop call. Node +simply enters the event loop after executing the input script. Node exits the +event loop when there are no more callbacks to perform. This behavior is like +browser JavaScript — the event loop is hidden from the user. + +HTTP is a first class citizen in Node, designed with streaming and low latency +in mind. This makes Node well suited for the foundation of a web library or +framework. + +Just because Node is designed without threads, doesn't mean you cannot take +advantage of multiple cores in your environment. Child processes can be spawned +by using our [`child_process.fork()`][] API, and are designed to be easy to +communicate with. Built upon that same interface is the [`cluster`][] module, +which allows you to share sockets between processes to enable load balancing +over your cores. + +[Blocking vs Non-Blocking]: https://nodejs.org/en/docs/guides/blocking-vs-non-blocking/ +[`child_process.fork()`]: https://nodejs.org/api/child_process.html#child_process_child_process_fork_modulepath_args_options +[`cluster`]: https://nodejs.org/api/cluster.html +[event loop]: https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/ +[Event Machine]: https://github.com/eventmachine/eventmachine +[Twisted]: http://twistedmatrix.com/ diff --git a/locale/fa/about/privacy.md b/locale/fa/about/privacy.md new file mode 100644 index 0000000000000..7a9e098289fe7 --- /dev/null +++ b/locale/fa/about/privacy.md @@ -0,0 +1,94 @@ +--- +title: Privacy Policy +layout: about.hbs +--- +# Privacy Policy + +NODE.JS FOUNDATION (the "Foundation”) is committed to protecting the privacy of its users. This Privacy Policy (or the “Policy”) applies to its websites (whether currently or in the future supported, hosted or maintained, including without limitation nodejs.org, the “Sites”) and describes the information the Foundation collects about users of the Sites (“users”) and how that information may be used. + +Read the Privacy Policy carefully. By using any Site, you will be deemed to have accepted the terms of the Policy. 
If you do not agree to accept the terms of the Privacy Policy, you are directed to discontinue accessing or otherwise using the Sites or any materials obtained from the Sites.
+
+## Changes to the Privacy Policy
+The Foundation reserves the right to update and change this Privacy Policy from time to time. Each time a user uses the Sites, the current version of the Privacy Policy applies. Accordingly, a user should check the date of this Privacy Policy (which appears at the top) and review for any changes since the last version. If a user does not agree to the Privacy Policy, the user should not use any of the Sites. Continued use of any of the Sites following any revision of this Privacy Policy constitutes an acceptance of any change.
+
+## What Does this Privacy Policy Cover?
+This Privacy Policy covers the Foundation’s treatment of aggregate information collected by the Sites and personal information that you provide in connection with your use of the Sites. This Policy does not apply to the practices of third parties that the Foundation does not own or control, including but not limited to third party services you access through the Foundation, or to individuals that the Foundation does not employ or manage.
+
+## Children Under 13 Years of Age
+Unless specifically indicated within a Site, the Sites are not intended for minor children not of age (including without limitation those under 13), and they should not use the Sites. If you are under 18, you may use the Site only with involvement of a parent or guardian or if you are an emancipated minor. Except as specifically indicated within a Site, we do not knowingly collect or solicit information from, market to or accept services from children. If we become aware that a child under 13 has provided us with personal information without parental consent, we will take reasonable steps to remove such information and terminate the child’s account. If you become aware that a child has provided us with personally identifiable information without parental consent, please contact us at privacy@nodejs.org so we may remove the information.
+
+## Information About Users that the Foundation Collects
+On the Sites, users may order products or services, and register to receive materials. Information collected on the Sites includes community forum content, diaries, profiles, photographs, names, unique identifiers (e.g., social media handles or usernames), contact and billing information (e.g., email address, postal address, telephone, fax), and transaction information. In order to access certain personalized services on the Sites, you may be asked to also create and store a username and password for an account from the Foundation.
+
+In order to tailor the Foundation’s subsequent communications to users and continuously improve the Sites’ products and services, the Foundation may also ask users to provide information regarding their interests, demographics, experience and detailed contact preferences. The Foundation and third party advertising companies may track information concerning a user’s use of the Sites, such as a user’s IP address.
+ +## How the Foundation Uses the Information Collected +The Foundation may use collected information for any lawful purpose related to the Foundation’s business, including, but not limited to: + +- To understand a user’s needs and create content that is relevant to the user; +- To generate statistical studies; +- To conduct market research and planning by sending user surveys; +- To notify user referrals of services, information, or products when a user requests that the Foundation send such information to referrals; +- To improve services, information, and products; +- To help a user complete a transaction, or provide services or customer support; +- To communicate back to the user; +- To update the user on services, information, and products; +- To personalize a Site for the user; +- To notify the user of any changes with a Site that may affect the user; +- To enforce terms of use on a Site; and +- To allow the user to purchase products, access services, or otherwise engage in activities the user selects. + +User names, identifications ("IDs"), and email addresses (as well as any additional information that a user may choose to post) may be publicly available on a Site when users voluntarily and publicly disclose personal information, such as when a user posts information in conjunction with content subject to an Open Source license, or as part of a message posted to a public forum or a publicly released software application. The personal information you may provide to the Foundation may reveal or allow others to discern aspects of your life that are not expressly stated in your profile (for example, your picture or your name may reveal your hair color, race or approximate age). By providing personal information to us when you create or update your account and profile or post a photograph, you are expressly and voluntarily accepting our Terms of Use and freely accepting and agreeing to our processing of your personal information in ways set out by this Privacy Policy. Supplying information to us, including any information deemed “sensitive” by applicable law, is entirely voluntary on your part. You may withdraw your consent to the Foundation’s collection and processing of your information by closing your account. You should be aware that your information may continue to be viewable to others after you close your account, such as on cached pages on Internet search engines. Users may not be able to change or remove public postings once posted. Such information may be used by visitors of these pages to send unsolicited messages. The Foundation is not responsible for any consequences which may occur from the third-party use of information that a user chooses to submit to public pages. + +## Opt Out +A user will always be able to make the decision whether to proceed with any activity that requests personal information including personally identifiable information. If a user does not provide requested information, the user may not be able to complete certain transactions. + +Users are not licensed to add other users to a Site (even users who entered into transactions with them) or to their mailing lists without written consent. +The Foundation encourages users to evaluate privacy and security policies of any of the Sites’ transaction partners before entering into transactions or choosing to disclose information. 
+
+## Email
+The Foundation may use (or provide to The Linux Foundation or other third party contractors to use) contact information received by the Foundation to email any user with respect to any Foundation or project of The Linux Foundation (a “Project”) opportunity, event or other matter.
+
+If a user no longer wishes to receive emails from the Foundation or any Project or any Site, the Foundation will (or, if applicable, have The Linux Foundation) provide instructions in each of its emails on how to be removed from any lists. The Foundation will make commercially reasonable efforts to honor such requests.
+
+## Photographs
+Users may have the opportunity to submit photographs to the Sites for product promotions, contests, and other purposes to be disclosed at the time of request. In these circumstances, the Sites are designed to allow the public to view, download, save, and otherwise access the photographs posted. By submitting a photograph, users waive any privacy expectations users have with respect to the security of such photographs, and the Foundation’s use or exploitation of users’ likeness. You may submit a photograph only if you are the copyright holder or if you are authorized to do so under license by the copyright holder, and by submitting a photograph you agree to indemnify and hold the Foundation, its directors, officers, employees and agents harmless from any claims arising out of your submission. By submitting a photograph, you grant the Foundation a perpetual, worldwide, royalty-free license to use the photograph in any media now known or hereinafter invented for any business purpose that the Foundation, at its sole discretion, may decide.
+
+## Links to Third-Party Sites and Services
+The Sites may permit you to access or link to third party websites and information on the Internet, and other websites may contain links to the Sites. When a user uses these links, the user leaves the Sites. The Foundation has not reviewed these third party sites, does not control, and is not responsible for, any of the third party sites, their content or privacy practices. The privacy and security practices of websites accessed from the Sites are not covered by this Privacy Policy, and the Foundation is not responsible for the privacy or security practices or the content of such websites, including but not limited to the third party services you access through the Foundation. If a user decides to access any of the linked sites, the Foundation encourages the user to read the privacy statements of those sites. The user accesses such sites at user’s own risk.
+
+We may receive information when you use your account to log into a third-party site or application in order to recommend tailored content or advertising to you and to improve your user experience on our site. We may provide reports containing aggregated impression information to third parties to measure Internet traffic and usage patterns.
+
+## Service Orders
+To purchase services, users may be asked to be directed to a third party site, such as PayPal, to pay for their purchases. If applicable, the third party site may collect payment information directly to facilitate a transaction. The Foundation will only record the result of the transaction and any references to the transaction record provided by the third party site. The Foundation is not responsible for the services provided or information collected on such third party sites.
+ +## Sharing of Information +The Foundation may disclose personal or aggregate information that is associated with your profile as described in this Privacy Policy, as permitted by law or as reasonably necessary to: (1) comply with a legal requirement or process, including, but not limited to, civil and criminal subpoenas, court orders or other compulsory disclosures; (2) investigate and enforce this Privacy Policy or our then-current Terms of Use, if any; (3) respond to claims of a violation of the rights of third parties; (4) respond to customer service inquiries; (5) protect the rights, property, or safety of the Foundation, our users, or the public; or (6) as part of the sale of all or a portion of the assets of the Foundation or as a change in control of the organization or one of its affiliates or in preparation for any of these events. The Foundation reserves the right to supply any such information to any organization into which the Foundation may merge in the future or to which it may make any transfer. Any third party to which the Foundation transfers or sells all or any of its assets will have the right to use the personal and other information that you provide in the manner set out in this Privacy Policy. + +## Is Information About Me Secure? +To keep your information safe, prevent unauthorized access or disclosure, maintain data accuracy, and ensure the appropriate use of information, the Foundation implements industry-standard physical, electronic, and managerial procedures to safeguard and secure the information the Foundation collects. However, the Foundation does not guarantee that unauthorized third parties will never defeat measures taken to prevent improper use of personally identifiable information. + +Access to users’ nonpublic personally identifiable information is restricted to the Foundation and Linux Foundation personnel, including contractors for each such organization on a need-to-know basis. + +User passwords are keys to accounts. Use unique numbers, letters, and special characters for passwords and do not disclose passwords to other people in order to prevent loss of account control. Users are responsible for all actions taken in their accounts. Notify the Foundation of any password compromises, and change passwords periodically to maintain account protection. + +In the event the Foundation becomes aware that the security of a Site has been compromised or user’s personally identifiable information has been disclosed to unrelated third parties as a result of external activity, including but not limited to security attacks or fraud, the Foundation reserves the right to take reasonable appropriate measures, including but not limited to, investigation and reporting, and notification to and cooperation with law enforcement authorities. + +While our aim is to keep data from unauthorized or unsafe access, modification or destruction, no method of transmission on the Internet, or method of electronic storage, is 100% secure and we cannot guarantee its absolute security. + +## Data Protection +Given the international scope of the Foundation, personal information may be visible to persons outside your country of residence, including to persons in countries that your own country’s privacy laws and regulations deem deficient in ensuring an adequate level of protection for such information. If you are unsure whether this privacy statement is in conflict with applicable local rules, you should not submit your information. 
If you are located within the European Union, you should note that your information will be transferred to the United States, which is deemed by the European Union to have inadequate data protection. Nevertheless, in accordance with local laws implementing the European Union Privacy Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data, individuals located in countries outside of the United States of America who submit personal information do thereby consent to the general use of such information as provided in this Privacy Policy and to its transfer to and/or storage in the United States of America. By utilizing any Site and/or directly providing personal information to us, you hereby agree to and acknowledge your understanding of the terms of this Privacy Policy, and consent to have your personal data transferred to and processed in the United States and/or in other jurisdictions as determined by the Foundation, notwithstanding your country of origin, or country, state and/or province of residence. If you do not want your personal information collected and used by the Foundation, please do not visit or use the Sites.
+
+## Governing Law
+This Privacy Policy is governed by the laws of the State of California, United States of America without giving any effect to the principles of conflicts of law.
+
+## California Privacy Rights
+The California Online Privacy Protection Act (“CalOPPA”) permits customers who are California residents and who have provided the Foundation with “personal information” as defined in CalOPPA to request certain information about the disclosure of information to third parties for their direct marketing purposes. If you are a California resident with a question regarding this provision, please contact privacy@nodejs.org.
+
+Please note that the Foundation does not respond to “do not track” signals or other similar mechanisms intended to allow California residents to opt-out of Internet tracking under CalOPPA. The Foundation may track and/or disclose your online activities over time and across different websites to third parties when you use our services.
+
+## What to Do in the Event of Lost or Stolen Information
+You must promptly notify us at privacy@nodejs.org if you become aware that any information provided by or submitted to our Site or through our Product is lost, stolen, or used without permission.
+
+## Questions or Concerns
+If you have any questions or concerns regarding privacy at the Foundation, please send us a detailed message to [privacy@nodejs.org](mailto:privacy@nodejs.org).
diff --git a/locale/fa/about/releases.md b/locale/fa/about/releases.md
new file mode 100644
index 0000000000000..2028456e8c100
--- /dev/null
+++ b/locale/fa/about/releases.md
@@ -0,0 +1,86 @@
+---
+layout: about.hbs
+title: Releases
+---
+# Releases
+
+The core team defines the roadmap's scope, as informed by Node.js' community. Releases happen as often as necessary and practical, but never before work is complete. Bugs are unavoidable, but pressure to ship a release will never prevail over ensuring the software is correct. The commitment to quality software is a core tenet of the Node.js project.
+
+## Patches
+
+Patch releases:
+
+- Include bug, performance, and security fixes.
+- Do not add or change public interfaces.
+- Do not alter the expected behavior of a given interface.
+- Can correct behavior if it is out-of-sync with the documentation.
+- Do not introduce changes which make seamless upgrades impossible.
+
+## Minors
+
+Minor releases:
+
+- Include additions and/or refinements of APIs and subsystems.
+- Do not generally change APIs or introduce backwards-incompatible breaking changes, except where unavoidable.
+- Are mostly additive releases.
+
+## Majors
+
+Major releases:
+
+- Usually introduce backwards-incompatible breaking changes.
+- Identify the API Node.js intends to support for the foreseeable future.
+- Require conversation, care, collaboration and appropriate scoping by the team and its users.
+
+## Scoping Features
+
+The team can add features and APIs into Node.js when:
+
+- The need is clear.
+- The API or feature has known consumers.
+- The API is clean, useful, and easy to use.
+
+When implementing core functionality for Node.js, the team or community may identify another lower-level API which could have utility beyond Node.js. When identified, Node.js can expose it for consumers.
+
+For example, consider the [`EventEmitter`] interface. The need to have an event subscription model for core modules to consume was clear, and that abstraction had utility beyond the Node.js core. It was not the case that its interface couldn't be implemented externally to Node.js; instead, Node.js needed the abstraction for itself, and also exposed it for use by Node.js consumers.
+
+Alternatively, it may be that many in the community adopt a pattern to handle common needs which Node.js does not satisfy. It may be clear that Node.js should deliver, by default, an API or feature for all Node.js consumers. Another possibility is a commonly-used compiled asset which is difficult to deliver across environments. Given this, Node.js may incorporate those changes directly.
+
+The core team does not take the decision lightly to add a new API to Node.js. Node.js has a strong commitment to backwards compatibility. As such, community input and conversation must occur before the team takes action. Even if an API is otherwise suitable for addition, the team must identify potential consumers.
+
+## Deprecation
+
+On occasion, the team must deprecate a feature or API of Node.js. Before coming to any final conclusion, the team must identify the consumers of the API and how they use it. Some questions to ask are:
+
+- If this API is widely used by the community, what is the need for flagging it as deprecated?
+- Do we have a replacement API, or is there a transition path?
+- How long does the API remain deprecated before removal?
+- Does an external module exist which its consumers can easily substitute?
+
+The team takes the same careful consideration when deprecating a Node.js API as they do when adding another.
+
+[`EventEmitter`]: https://nodejs.org/api/events.html#events_class_eventemitter
diff --git a/locale/fa/about/resources.md b/locale/fa/about/resources.md
new file mode 100644
index 0000000000000..1686722289027
--- /dev/null
+++ b/locale/fa/about/resources.md
@@ -0,0 +1,31 @@
+---
+layout: about.hbs
+title: Logos and Graphics
+---
+# Resources
+
+## Logo Downloads
+
+Please review the [trademark policy](/about/trademark/) for information about permissible use of Node.js® logos and marks.
+
+Guidelines for the visual display of the Node.js mark are described in the [Visual Guidelines](/static/documents/foundation-visual-guidelines.pdf).
| Node.js standard AI | Node.js reversed AI |
| --- | --- |
| Node.js standard with less color AI | Node.js reversed with less color AI |
This is a guest post by James "SubStack" Halliday, originally posted on his blog, and reposted here with permission.
Writing applications as a sequence of tiny services that all talk to each other over the network has many upsides, but it can be annoyingly tedious to get all the subsystems up and running.
+ +Running a seaport can help with getting all the services to talk to each other, but running the processes is another matter, especially when you have new code to push into production.
+ +fleet aims to make it really easy for anyone on your team to push new code from git to an armada of servers and manage all the processes in your stack.
+ +To start using fleet, just install the fleet command with npm:
+ +npm install -g fleet+ +
Then on one of your servers, start a fleet hub. From a fresh directory, give it a passphrase and a port to listen on:
+ +fleet hub --port=7000 --secret=beepboop+ +
Now fleet is listening on :7000 for commands and has started a git server on :7001 over http. There are no ssh keys or post-commit hooks to configure; just run that command and you're ready to go!
+ +Next set up some worker drones to run your processes. You can have as many workers as you like on a single server but each worker should be run from a separate directory. Just do:
+ +fleet drone --hub=x.x.x.x:7000 --secret=beepboop+ +
where x.x.x.x is the address where the fleet hub is running. Spin up a few of these drones.
+ +Now navigate to the directory of the app you want to deploy. First set a remote so you don't need to type --hub and --secret all the time.
+ +fleet remote add default --hub=x.x.x.x:7000 --secret=beepboop+ +
Fleet just created a fleet.json file for you to save your settings.
+ +From the same app directory, to deploy your code just do:
+ +fleet deploy+ +
The deploy command does a git push to the fleet hub's git http server and then the hub instructs all the drones to pull from it. Your code gets checked out into a new directory on all the fleet drones every time you deploy.
+ +Because fleet is designed specifically for managing applications with lots of tiny services, the deploy command isn't tied to running any processes. Starting processes is up to the programmer but it's super simple. Just use the fleet spawn command:
+ +fleet spawn -- node server.js 8080+ +
By default fleet picks a drone at random to run the process on. You can specify which drone you want to run a particular process on with the --drone switch if it matters.
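For example, to pin a process to a particular drone (the drone name here is taken from the fleet ps output below, and the --key=value flag style is an assumption based on the other examples in this post):

```
fleet spawn --drone=drone#3dfe17b8 -- node server.js 8080
```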
+ +Start a few processes across all your worker drones and then show what is running with the fleet ps command:
fleet ps
drone#3dfe17b8
├─┬ pid#1e99f4
│ ├── status: running
│ ├── commit: webapp/1b8050fcaf8f1b02b9175fcb422644cb67dc8cc5
│ └── command: node server.js 8888
└─┬ pid#d7048a
  ├── status: running
  ├── commit: webapp/1b8050fcaf8f1b02b9175fcb422644cb67dc8cc5
  └── command: node server.js 8889
Now suppose that you have new code to push out into production. By default, fleet lets you spin up new services without disturbing your existing services. If you fleet deploy again after checking in some new changes to git, the next time you fleet spawn a new process, that process will be spun up in a completely new directory based on the git commit hash. To stop a process, just use fleet stop.
+ +This approach lets you verify that the new services work before bringing down the old services. You can even start experimenting with heterogeneous and incremental deployment by hooking into a custom http proxy!
+ +Even better, if you use a service registry like seaport for managing the host/port tables, you can spin up new ad-hoc staging clusters all the time without disrupting the normal operation of your site before rolling out new code to users.
+ +Fleet has many more commands that you can learn about with its git-style manpage-based help system! Just do fleet help to get a list of all the commands you can run.
fleet help
Usage: fleet <command> [<args>]

The commands are:
  deploy   Push code to drones.
  drone    Connect to a hub as a worker.
  exec     Run commands on drones.
  hub      Create a hub for drones to connect.
  monitor  Show service events system-wide.
  ps       List the running processes on the drones.
  remote   Manage the set of remote hubs.
  spawn    Run services on drones.
  stop     Stop processes running on drones.

For help about a command, try `fleet help <command>`.
To get started, npm install -g fleet and check out the code on github!
+ +
diff --git a/locale/fa/blog/module/service-logging-in-json-with-bunyan.md b/locale/fa/blog/module/service-logging-in-json-with-bunyan.md
new file mode 100644
index 0000000000000..4e2692e78f748
--- /dev/null
+++ b/locale/fa/blog/module/service-logging-in-json-with-bunyan.md
@@ -0,0 +1,340 @@
+---
+title: Service logging in JSON with Bunyan
+author: trentmick
+date: 2012-03-28T19:25:26.000Z
+status: publish
+category: module
+slug: service-logging-in-json-with-bunyan
+layout: blog-post.hbs
+---
+
+
+
Service logs are gold, if you can mine them. We scan them for occasional debugging. Perhaps we grep them looking for errors or warnings, or set up an occasional nagios log regex monitor. If that. This is a waste of the best channel for data about a service.
+ +"Log. (Huh) What is it good for. Absolutely ..."
These are what logs are good for. The current state of logging is barely adequate for the first of these. Doing reliable analysis, and even monitoring, of varied "printf-style" logs is a grueling or hacky task that most either don't bother with, fall back to paying someone else to do (viz. Splunk's great successes), or, for web sites, punt and use the plethora of JavaScript-based web analytics tools.
+ +Let's log in JSON. Let's format log records with a filter outside the app. Let's put more info in log records by not shoehorning into a printf-message. Debuggability can be improved. Monitoring and analysis can definitely be improved. Let's not write another regex-based parser, and use the time we've saved writing tools to collate logs from multiple nodes and services, to query structured logs (from all services, not just web servers), etc.
+ +At Joyent we use node.js for running many core services -- loosely coupled through HTTP REST APIs and/or AMQP. In this post I'll draw on experiences from my work on Joyent's SmartDataCenter product and observations of Joyent Cloud operations to suggest some improvements to service logging. I'll show the (open source) Bunyan logging library and tool that we're developing to improve the logging toolchain.
+ +# apache access log
+10.0.1.22 - - [15/Oct/2010:11:46:46 -0700] "GET /favicon.ico HTTP/1.1" 404 209
+fe80::6233:4bff:fe29:3173 - - [15/Oct/2010:11:46:58 -0700] "GET / HTTP/1.1" 200 44
+
+# apache error log
+[Fri Oct 15 11:46:46 2010] [error] [client 10.0.1.22] File does not exist: /Library/WebServer/Documents/favicon.ico
+[Fri Oct 15 11:46:58 2010] [error] [client fe80::6233:4bff:fe29:3173] File does not exist: /Library/WebServer/Documents/favicon.ico
+
+# Mac /var/log/secure.log
+Oct 14 09:20:56 banana loginwindow[41]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
+Oct 14 12:32:20 banana com.apple.SecurityServer[25]: UID 501 authenticated as user trentm (UID 501) for right 'system.privilege.admin'
+
+# an internal joyent agent log
+[2012-02-07 00:37:11.898] [INFO] AMQPAgent - Publishing success.
+[2012-02-07 00:37:11.910] [DEBUG] AMQPAgent - { req_id: '8afb8d99-df8e-4724-8535-3d52adaebf25',
+ timestamp: '2012-02-07T00:37:11.898Z',
+
+# typical expressjs log output
+[Mon, 21 Nov 2011 20:52:11 GMT] 200 GET /foo (1ms)
+Blah, some other unstructured output to from a console.log call.
+
+
What're we doing here? Five logs at random. Five different date formats. As Paul Querna points out, we haven't improved log parsability in 20 years. Parsability is enemy number one. You can't use your logs until you can parse the records, and faced with the above, the inevitable solution is a one-off regular expression.
The current state of the art is various parsing libs, analysis tools and homebrew scripts ranging from grep to Perl, whose scope is limited to a few niche log formats.
+ +JSON.parse() solves all that. Let's log in JSON. But it means a change in thinking: The first-level audience for log files shouldn't be a person, but a machine.
That is not said lightly. The "Unix Way" of small focused tools lightly coupled with text output is important. JSON is less "text-y" than, e.g., Apache common log format. JSON makes grep and awk awkward. Using less directly on a log is handy.
But not handy enough. That 80's pastel jumpsuit awkwardness you're feeling isn't the JSON, it's your tools. Time to find a json tool -- json is one, bunyan described below is another one. Time to learn your JSON library instead of your regex library: JavaScript, Python, Ruby, Java, Perl.
Time to burn your log4j Layout classes and move formatting to the tools side. Creating a log message with semantic information and throwing that away to make a string is silly. The win at being able to trivially parse log records is huge. The possibilities at being able to add ad hoc structured information to individual log records is interesting: think program state metrics, think feeding to Splunk, or loggly, think easy audit logs.
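As a small taste of that tooling story: with one JSON record per line, even quick-and-dirty filtering keeps working. A hypothetical service.log, piped through the bunyan CLI described below (Bunyan writes error records as "level":50):

```
grep '"level":50' service.log | ./node_modules/.bin/bunyan
```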
+ +Bunyan is a node.js module for logging in JSON and a bunyan CLI tool to view those logs.
Logging with Bunyan basically looks like this:
+ +$ cat hi.js
+var Logger = require('bunyan');
+var log = new Logger({name: 'hello' /*, ... */});
+log.info("hi %s", "paul");
+
+
+And you'll get a log record like this:
+ +$ node hi.js
+{"name":"hello","hostname":"banana.local","pid":40026,"level":30,"msg":"hi paul","time":"2012-03-28T17:25:37.050Z","v":0}
+
+
+Pipe that through the bunyan tool that is part of the "node-bunyan" install to get more readable output:
$ node hi.js | ./node_modules/.bin/bunyan # formatted text output
+[2012-02-07T18:50:18.003Z] INFO: hello/40026 on banana.local: hi paul
+
+$ node hi.js | ./node_modules/.bin/bunyan -j # indented JSON output
+{
+ "name": "hello",
+ "hostname": "banana.local",
+ "pid": 40087,
+ "level": 30,
+ "msg": "hi paul",
+ "time": "2012-03-28T17:26:38.431Z",
+ "v": 0
+}
+
+
(To decode those records: "level" is numeric, with Bunyan mapping trace/debug/info/warn/error/fatal to 10/20/30/40/50/60, and "v" is the version of the Bunyan log format.)

Bunyan is log4j-like: create a Logger with a name, call log.info(...), etc. However, it has no intention of reproducing much of the functionality of log4j. IMO, much of that is overkill for the types of services you'll tend to be writing with node.js.
Let's walk through a bigger example to show some interesting things in Bunyan. We'll create a very small "Hello API" server using the excellent restify library -- which we use heavily here at Joyent. (Bunyan doesn't require restify at all; you can easily use Bunyan with Express or whatever.)
+ +You can follow along in https://github.com/trentm/hello-json-logging if you like. Note that I'm using the current HEAD of the bunyan and restify trees here, so details might change a bit. Prerequisite: a node 0.6.x installation.
+ +git clone https://github.com/trentm/hello-json-logging.git
+cd hello-json-logging
+make
+
+
+Our server first creates a Bunyan logger:
+ +var Logger = require('bunyan');
+var log = new Logger({
+ name: 'helloapi',
+ streams: [
+ {
+ stream: process.stdout,
+ level: 'debug'
+ },
+ {
+ path: 'hello.log',
+ level: 'trace'
+ }
+ ],
+ serializers: {
+ req: Logger.stdSerializers.req,
+ res: restify.bunyan.serializers.response,
+ },
+});
+
+
+Every Bunyan logger must have a name. Unlike log4j, this is not a hierarchical dotted namespace. It is just a name field for the log records.
+ +Every Bunyan logger has one or more streams, to which log records are written. Here we've defined two: logging at DEBUG level and above is written to stdout, and logging at TRACE and above is appended to 'hello.log'.
+ +Bunyan has the concept of serializers: a registry of functions that know how to convert a JavaScript object for a certain log record field to a nice JSON representation for logging. For example, here we register the Logger.stdSerializers.req function to convert HTTP Request objects (using the field name "req") to JSON. More on serializers later.
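A serializer is just a function from the raw field value to the object that should be logged. As a minimal sketch, a custom serializer for a hypothetical "user" field (not part of the demo app) could keep only the safe parts of the object:

```javascript
var Logger = require('bunyan');

var log = new Logger({
  name: 'helloapi',
  serializers: {
    // Called for any log record carrying a "user" field.
    user: function (user) {
      return {id: user.id, login: user.login};
    }
  }
});

// log.info({user: someBigUserObject}, 'user logged in') would then emit
// just {"user":{"id":...,"login":...},...} in the record.
```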
Restify 1.x and above has bunyan support baked in. You pass in your Bunyan logger like this:
+ +var server = restify.createServer({
+ name: 'Hello API',
+ log: log // Pass our logger to restify.
+});
+
+
+Our simple API will have a single GET /hello?name=NAME endpoint:
server.get({path: '/hello', name: 'SayHello'}, function(req, res, next) {
+ var caller = req.params.name || 'caller';
+ req.log.debug('caller is "%s"', caller);
+ res.send({"hello": caller});
+ return next();
+});
+
+
+If we run that, node server.js, and call the endpoint, we get the expected restify response:
$ curl -iSs http://0.0.0.0:8080/hello?name=paul
+HTTP/1.1 200 OK
+Access-Control-Allow-Origin: *
+Access-Control-Allow-Headers: Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version
+Access-Control-Expose-Headers: X-Api-Version, X-Request-Id, X-Response-Time
+Server: Hello API
+X-Request-Id: f6aaf942-c60d-4c72-8ddd-bada459db5e3
+Access-Control-Allow-Methods: GET
+Connection: close
+Content-Length: 16
+Content-MD5: Xmn3QcFXaIaKw9RPUARGBA==
+Content-Type: application/json
+Date: Tue, 07 Feb 2012 19:12:35 GMT
+X-Response-Time: 4
+
+{"hello":"paul"}
+
+
Let's add two things to our server. First, we'll use server.pre to hook into restify's request handling before routing, where we'll log the request.
server.pre(function (request, response, next) {
+ request.log.info({req: request}, 'start'); // (1)
+ return next();
+});
+
+
+This is the first time we've seen this log.info style with an object as the first argument. Bunyan logging methods (log.trace, log.debug, ...) all support an optional first object argument with extra log record fields:
log.info(<object> fields, <string> msg, ...)
+
+
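Plain values passed this way simply become top-level properties of the JSON record. An illustrative call (hypothetical fields, not from the demo app):

```javascript
// "route" and "latency" merge into the record alongside msg, time, etc.
log.info({route: 'SayHello', latency: 4}, 'handled %s', '/hello');
// -> {..., "route":"SayHello", "latency":4, "msg":"handled /hello", ...}
```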
In the server.pre hook above, we pass in the restify Request object, req. The "req" serializer we registered above will come into play here, but bear with me.
Remember that we already had this debug log statement in our endpoint handler:
+ +req.log.debug('caller is "%s"', caller); // (2)
+
+
Second, we'll use the restify server's after event to log the response:
server.on('after', function (req, res, route) {
+ req.log.info({res: res}, "finished"); // (3)
+});
+
+
Now let's see what log output we get when somebody hits our API's endpoint:
+ +$ curl -iSs http://0.0.0.0:8080/hello?name=paul
+HTTP/1.1 200 OK
+...
+X-Request-Id: 9496dfdd-4ec7-4b59-aae7-3fed57aed5ba
+...
+
+{"hello":"paul"}
+
+
+Here is the server log:
+ +[trentm@banana:~/tm/hello-json-logging]$ node server.js
+... intro "listening at" log message elided ...
+{"name":"helloapi","hostname":"banana.local","pid":40341,"level":30,"req":{"method":"GET","url":"/hello?name=paul","headers":{"user-agent":"curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3","host":"0.0.0.0:8080","accept":"*/*"},"remoteAddress":"127.0.0.1","remotePort":59831},"msg":"start","time":"2012-03-28T17:37:29.506Z","v":0}
+{"name":"helloapi","hostname":"banana.local","pid":40341,"route":"SayHello","req_id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","level":20,"msg":"caller is \"paul\"","time":"2012-03-28T17:37:29.507Z","v":0}
+{"name":"helloapi","hostname":"banana.local","pid":40341,"route":"SayHello","req_id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","level":30,"res":{"statusCode":200,"headers":{"access-control-allow-origin":"*","access-control-allow-headers":"Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version","access-control-expose-headers":"X-Api-Version, X-Request-Id, X-Response-Time","server":"Hello API","x-request-id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","access-control-allow-methods":"GET","connection":"close","content-length":16,"content-md5":"Xmn3QcFXaIaKw9RPUARGBA==","content-type":"application/json","date":"Wed, 28 Mar 2012 17:37:29 GMT","x-response-time":3}},"msg":"finished","time":"2012-03-28T17:37:29.510Z","v":0}
+
+
Let's look at each in turn to see what is interesting -- pretty-printed with node server.js | ./node_modules/.bin/bunyan -j:
{ // (1)
+ "name": "helloapi",
+ "hostname": "banana.local",
+ "pid": 40442,
+ "level": 30,
+ "req": {
+ "method": "GET",
+ "url": "/hello?name=paul",
+ "headers": {
+ "user-agent": "curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3",
+ "host": "0.0.0.0:8080",
+ "accept": "*/*"
+ },
+ "remoteAddress": "127.0.0.1",
+ "remotePort": 59834
+ },
+ "msg": "start",
+ "time": "2012-03-28T17:39:44.880Z",
+ "v": 0
+}
+
+
+Here we logged the incoming request with request.log.info({req: request}, 'start'). The use of the "req" field triggers the "req" serializer registered at Logger creation.
Next the req.log.debug in our handler:
{ // (2)
+ "name": "helloapi",
+ "hostname": "banana.local",
+ "pid": 40442,
+ "route": "SayHello",
+ "req_id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
+ "level": 20,
+ "msg": "caller is \"paul\"",
+ "time": "2012-03-28T17:39:44.883Z",
+ "v": 0
+}
+
+
+and the log of response in the "after" event:
+ +{ // (3)
+ "name": "helloapi",
+ "hostname": "banana.local",
+ "pid": 40442,
+ "route": "SayHello",
+ "req_id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
+ "level": 30,
+ "res": {
+ "statusCode": 200,
+ "headers": {
+ "access-control-allow-origin": "*",
+ "access-control-allow-headers": "Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version",
+ "access-control-expose-headers": "X-Api-Version, X-Request-Id, X-Response-Time",
+ "server": "Hello API",
+ "x-request-id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
+ "access-control-allow-methods": "GET",
+ "connection": "close",
+ "content-length": 16,
+ "content-md5": "Xmn3QcFXaIaKw9RPUARGBA==",
+ "content-type": "application/json",
+ "date": "Wed, 28 Mar 2012 17:39:44 GMT",
+ "x-response-time": 5
+ }
+ },
+ "msg": "finished",
+ "time": "2012-03-28T17:39:44.886Z",
+ "v": 0
+}
+
+
+Two useful details of note here:
+ +The last two log messages include a "req_id" field (added to the req.log logger by restify). Note that this is the same UUID as the "X-Request-Id" header in the curl response. This means that if you use req.log for logging in your API handlers you will get an easy way to collate all logging for particular requests.
If yours is an SOA system with many services, a best practice is to carry that X-Request-Id/req_id through your system to enable collating handling of a single top-level request.
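restify presumably builds that per-request logger with Bunyan's log.child; the same pattern in isolation (with a made-up id) looks like this:

```javascript
// A child logger stamps its bound fields onto every record it emits:
var reqLog = log.child({req_id: '9496dfdd-4ec7-4b59-aae7-3fed57aed5ba'});
reqLog.info('start'); // the record carries req_id without per-call repetition
```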
The last two log messages include a "route" field. This tells you to which handler restify routed the request. While possibly useful for debugging, this can be very helpful for log-based monitoring of endpoints on a server.
Recall that we also set up all logging to go to the "hello.log" file. This was set at the TRACE level. Restify will log more detail of its operation at the trace level. See my "hello.log" for an example. The bunyan tool does a decent job of nicely formatting multiline messages and "req"/"res" keys (with color, not shown in the gist).
This is logging you can use effectively.
+ +Bunyan is just one of many options for logging in node.js-land. Others (that I know of) supporting JSON logging are winston and logmagic. Paul Querna has an excellent post on using JSON for logging, which shows logmagic usage and also touches on topics like the GELF logging format, log transporting, indexing and searching.
Parsing challenges won't ever completely go away, but they can for your logs if you use JSON. Collating log records across logs from multiple nodes is facilitated by a common "time" field. Correlating logging across multiple services is enabled by carrying a common "req_id" (or equivalent) through all such logs.
+
+Separate log files for a single service are an anti-pattern. The typical Apache example of separate access and error logs is legacy, not an example to follow. A JSON log provides the structure necessary for tooling to easily filter for log records of a particular type.
+
+JSON logs bring possibilities. Feeding to tools like Splunk becomes easy. Ad hoc fields allow for a lightly spec'd comm channel from apps to other services: records with a "metric" could feed to statsd, records with a "loggly: true" could feed to loggly.com.
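+
+That tooling can be very light. A sketch of such a filter (the "metric" and "loggly" fields are the ad hoc conventions suggested above; the statsd/loggly hookups are left as stubs):
+
+```
+// log-router.js -- route JSON log records by field:  node log-router.js < hello.log
+var readline = require('readline');
+
+var rl = readline.createInterface({ input: process.stdin });
+rl.on('line', function (line) {
+  var rec;
+  try { rec = JSON.parse(line); } catch (e) { return; }  // skip non-JSON lines
+  if (rec.metric) {
+    console.log('to statsd:', rec.metric);   // feed to statsd here
+  }
+  if (rec.loggly === true) {
+    console.log('to loggly:', rec.msg);      // feed to loggly.com here
+  }
+});
+```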
+
+Here I've described a very simple example of restify and bunyan usage for node.js-based API services with easy JSON logging. Restify provides a powerful framework for robust API services. Bunyan provides a light API for nice JSON logging and the beginnings of tooling to help consume Bunyan JSON logs.
+ +Update (29-Mar-2012): Fix styles somewhat for RSS readers.
diff --git a/locale/fa/blog/nodejs-road-ahead.md b/locale/fa/blog/nodejs-road-ahead.md new file mode 100644 index 0000000000000..ca856d50e63f1 --- /dev/null +++ b/locale/fa/blog/nodejs-road-ahead.md @@ -0,0 +1,54 @@ +--- +title: Node.js and the Road Ahead +date: 2014-01-16T23:00:00.000Z +author: Timothy J Fontaine +slug: nodejs-road-ahead +layout: blog-post.hbs +--- +As the new project lead for Node.js I am excited for our future, and want to +give you an update on where we are. + +One of Node's major goals is to provide a small core, one that provides the +right amount of surface area for consumers to achieve and innovate, without +Node itself getting in the way. That ethos is alive and well, we're going to +continue to provide a small, simple, and stable set of APIs that facilitate the +amazing uses the community finds for Node. We're going to keep providing +backward compatible APIs, so code you write today will continue to work on +future versions of Node. And of course, performance tuning and bug fixing will +always be an important part of every release cycle. + +The release of Node v0.12 is imminent, and a lot of significant work has gone +into this release. There's streams3, a better keep alive agent for http, the vm +module is now based on contextify, and significant performance work done in +core features (Buffers, TLS, streams). We have a few APIs that are still being +ironed out before we can feature freeze and branch (execSync, AsyncListeners, +user definable instrumentation). We are definitely in the home stretch. + +But Node is far from done. In the short term there will be new releases of v8 +that we'll need to track, as well as integrating the new ABI stable C module +interface. There are interesting language features that we can use to extend +Node APIs (extend not replace). We need to write more tooling, we need to +expose more interfaces to further enable innovation. We can explore +functionality to embed Node in your existing project. + +The list can go on and on. Yet, Node is larger than the software itself. Node +is also the community, the businesses, the ecosystems, and their related +events. With that in mind there are things we can work to improve. + +The core team will be improving its procedures such that we can quickly and +efficiently communicate with you. We want to provide high quality and timely +responses to issues, describe our development roadmap, as well as provide our +progress during each release cycle. We know you're interested in our plans for +Node, and it's important we're able to provide that information. Communication +should be bidirectional: we want to continue to receive feedback about how +you're using Node, and what your pain points are. + +After the release of v0.12 we will facilitate the community to contribute and +curate content for nodejs.org. Allowing the community to continue to invest in +Node will ensure nodejs.org is an excellent starting point and the primary +resource for tutorials, documentation, and materials regarding Node. We have an +awesome and engaged community, and they're paramount to our success. + +I'm excited for Node's future, to see new and interesting use cases, and to +continue to help businesses scale and innovate with Node. We have a lot we can +accomplish together, and I look forward to seeing those results. 
diff --git a/locale/fa/blog/npm/2013-outage-postmortem.md b/locale/fa/blog/npm/2013-outage-postmortem.md new file mode 100644 index 0000000000000..01c2cb5238d4c --- /dev/null +++ b/locale/fa/blog/npm/2013-outage-postmortem.md @@ -0,0 +1,86 @@ +--- +date: 2013-11-26T15:14:59.000Z +author: Charlie Robbins +title: Keeping The npm Registry Awesome +slug: npm-post-mortem +category: npm +layout: blog-post.hbs +--- + +We know the availability and overall health of The npm Registry is paramount to everyone using Node.js as well as the larger JavaScript community and those of you using it for [some][browserify] [awesome][dotc] [projects][npm-rubygems] [and ideas][npm-python]. Between November 4th and November 15th, 2013, The npm Registry had several hours of downtime over three distinct time periods: + +1. November 4th -- 16:30 to 15:00 UTC +2. November 13th -- 15:00 to 19:30 UTC +3. November 15th -- 15:30 to 18:00 UTC + +The root cause of these outages was insufficient resources: both hardware and human. This is a full post-mortem where we will look at how npmjs.org works, what went wrong, how we changed the previous architecture of The npm Registry to fix it, as well as the next steps we are taking to prevent this from happening again. + +All of the next steps require additional expenditure from Nodejitsu: both servers and labor. This is why along with this post-mortem we are announcing our [crowdfunding campaign: scalenpm.org](https://scalenpm.org)! Our goal is to raise enough funds so that Nodejitsu can continue to run The npm Registry as a free service for _you, the community._ + +Please take a minute now to donate at [https://scalenpm.org](https://scalenpm.org)! + +## How does npmjs.org work? + +There are two distinct components that make up npmjs.org operated by different people: + +* **http://registry.npmjs.org**: The main CouchApp (Github: [isaacs/npmjs.org](https://github.com/isaacs/npmjs.org)) that stores both package tarballs and metadata. It has been operated by Nodejitsu since we [acquired IrisCouch in May](https://www.nodejitsu.com/company/press/2013/05/22/iriscouch/). The primary system administrator is [Jason Smith](https://github.com/jhs), the current CTO at Nodejitsu, cofounder of IrisCouch, and the System Administrator of registry.npmjs.org since 2011. +* **https://npmjs.com**: The npmjs website that you interact with using a web browser. It is a Node.js program (Github: [isaacs/npm-www](https://github.com/isaacs/npm-www)) maintained and operated by Isaac and running on a Joyent Public Cloud SmartMachine. + +Here is a high-level summary of the _old architecture:_
+
+
+ 
+Photo by Luc Viatour (flickr)
+Managing dependencies is a fundamental problem in building complex software. The terrific success of github and npm has made code reuse especially easy in the Node world, where packages don't exist in isolation but rather as nodes in a large graph. The software is constantly changing (releasing new versions), and each package has its own constraints about what other packages it requires to run (dependencies). npm keeps track of these constraints, and authors express what kind of changes are compatible using semantic versioning, allowing authors to specify that their package will work with even future versions of its dependencies as long as the semantic versions are assigned properly.
+This does mean that when you "npm install" a package with dependencies, there's no guarantee that you'll get the same set of code now that you would have gotten an hour ago, or that you would get if you were to run it again an hour later. You may get a bunch of bug fixes now that weren't available an hour ago. This is great during development, where you want to keep up with changes upstream. It's not necessarily what you want for deployment, though, where you want to validate whatever bits you're actually shipping. + +
+Put differently, it's understood that all software changes incur some risk, and it's critical to be able to manage this risk on your own terms. Taking that risk in development is good because by definition that's when you're incorporating and testing software changes. On the other hand, if you're shipping production software, you probably don't want to take this risk when cutting a release candidate (i.e. build time) or when you actually ship (i.e. deploy time) because you want to validate whatever you ship. + +
+You can address a simple case of this problem by only depending on specific versions of packages, allowing no semver flexibility at all, but this falls apart when you depend on packages that don't also adopt the same principle. Many of us at Joyent started wondering: can we generalize this approach? + +
+That brings us to npm shrinkwrap[1]: + +
+ +``` +NAME + npm-shrinkwrap -- Lock down dependency versions + +SYNOPSIS + npm shrinkwrap + +DESCRIPTION + This command locks down the versions of a package's dependencies so + that you can control exactly which versions of each dependency will + be used when your package is installed. +``` + +Let's consider package A: + +
+{
+ "name": "A",
+ "version": "0.1.0",
+ "dependencies": {
+ "B": "<0.1.0"
+ }
+}
+package B: + +
+{
+ "name": "B",
+ "version": "0.0.1",
+ "dependencies": {
+ "C": "<0.1.0"
+ }
+}
+and package C: + +
+{
+ "name": "C,
+ "version": "0.0.1"
+}
+If these are the only versions of A, B, and C available in the registry, then a normal "npm install A" will install: + +
+A@0.1.0
+└─┬ B@0.0.1
+ └── C@0.0.1
+Then, if B@0.0.2 is published, a fresh "npm install A" will install:
+A@0.1.0
+└─┬ B@0.0.2
+ └── C@0.0.1
+assuming the new version did not modify B's dependencies. Of course, the new version of B could include a new version of C and any number of new dependencies. As we said before, if A's author doesn't want that, she could specify a dependency on B@0.0.1. But if A's author and B's author are not the same person, there's no way for A's author to say that she does not want to pull in newly published versions of C when B hasn't changed at all. + +
+In this case, A's author can use + +
+# npm shrinkwrap
+This generates npm-shrinkwrap.json, which will look something like this: + +
+{
+ "name": "A",
+ "dependencies": {
+ "B": {
+ "version": "0.0.1",
+ "dependencies": {
+ "C": { "version": "0.1.0" }
+ }
+ }
+ }
+}
+The shrinkwrap command has locked down the dependencies based on what's currently installed in node_modules. When "npm install" installs a package with an npm-shrinkwrap.json file in the package root, the shrinkwrap file (rather than package.json files) completely drives the installation of that package and all of its dependencies (recursively). So now the author publishes A@0.1.0, and subsequent installs of this package will use B@0.0.1 and C@0.0.1, regardless of the dependencies and versions listed in A's, B's, and C's package.json files. If the authors of B and C publish new versions, they won't be used to install A because the shrinkwrap refers to older versions. Even if you generate a new shrinkwrap, it will still reference the older versions, since "npm shrinkwrap" uses what's installed locally rather than what's available in the registry.
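+
+Since the shrinkwrap file is plain JSON, it's easy to inspect with a few lines of code. A small sketch that prints every locked version (matching the format shown above):
+
+```
+// print-shrinkwrap.js -- list every dependency pinned by npm-shrinkwrap.json
+var fs = require('fs');
+
+function walk(name, node, depth) {
+  console.log(new Array(depth + 1).join('  ') + name + '@' + node.version);
+  Object.keys(node.dependencies || {}).forEach(function (dep) {
+    walk(dep, node.dependencies[dep], depth + 1);
+  });
+}
+
+var shrinkwrap = JSON.parse(fs.readFileSync('npm-shrinkwrap.json', 'utf8'));
+Object.keys(shrinkwrap.dependencies || {}).forEach(function (dep) {
+  walk(dep, shrinkwrap.dependencies[dep], 0);
+});
+```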
+Using a shrinkwrapped package is no different than using any other package: you can "npm install" it by hand, or add a dependency to your package.json file and "npm install" it. + +
+To shrinkwrap an existing package:
+
+1. Run npm install in the package root to install the current versions of all dependencies.
+2. Validate that the package works as expected with these versions.
+3. Run npm shrinkwrap, add npm-shrinkwrap.json to git, and publish your package.
+To add or update a dependency in a shrinkwrapped package:
+
+1. Run npm install in the package root to install the current versions of all dependencies.
+2. Add or update dependencies. npm install each new or updated package individually and then update package.json.
+3. Validate that the package works as expected with the new dependencies.
+4. Run npm shrinkwrap, commit the new npm-shrinkwrap.json, and publish your package.
+You can still use npm outdated(1) to view which dependencies have newer versions available. + +
+For more details, check out the full docs on npm shrinkwrap, from which much of the above is taken. + +
+## Checking node_modules into git?
+
+One previously proposed solution is to "npm install" your dependencies during development and commit the results into source control. Then you deploy your app from a specific git SHA knowing you've got exactly the same bits that you tested in development. This does address the problem, but it has its own issues: for one, binaries are tricky because you need to "npm install" them to get their sources, but this builds the [system-dependent] binary too. You can avoid checking in the binaries and use "npm rebuild" at build time, but we've had a lot of difficulty trying to do this.[2] At best, this is second-class treatment for binary modules, which are critical for many important types of Node applications.[3]
+Besides the issues with binary modules, this approach just felt wrong to many of us. There's a reason we don't check binaries into source control, and it's not just because they're platform-dependent. (After all, we could build and check in binaries for all supported platforms and operating systems.) It's because that approach is error-prone and redundant: error-prone because it introduces a new human failure mode where someone checks in a source change but doesn't regenerate all the binaries, and redundant because the binaries can always be built from the sources alone. An important principle of software version control is that you don't check in files derived directly from other files by a simple transformation.[4] Instead, you check in the original sources and automate the transformations via the build process. + +
+Dependencies are just like binaries in this regard: they're files derived from a simple transformation of something else that is (or could easily be) already available: the name and version of the dependency. Checking them in has all the same problems as checking in binaries: people could update package.json without updating the checked-in module (or vice versa). Besides that, adding new dependencies has to be done by hand, introducing more opportunities for error (checking in the wrong files, not checking in certain files, inadvertently changing files, and so on). Our feeling was: why check in this whole dependency tree (and create a mess for binary add-ons) when we could just check in the package name and version and have the build process do the rest? + +
+Finally, the approach of checking in node_modules doesn't really scale for us. We've got at least a dozen repos that will use restify, and it doesn't make sense to check that in everywhere when we could instead just specify which version each one is using. There's another principle at work here, which is separation of concerns: each repo specifies what it needs, while the build process figures out where to get it. + +
+We're not suggesting deploying a shrinkwrapped package directly and running "npm install" to install from shrinkwrap in production. We already have a build process to deal with binary modules and other automateable tasks. That's where we do the "npm install". We tar up the result and distribute the tarball. Since we test each build before shipping, we won't deploy something we didn't test. + +
+It's still possible to pick up newly published versions of existing packages at build time. We assume force publish is not that common in the first place, let alone force publish that breaks compatibility. If you're worried about this, you can use git SHAs in the shrinkwrap or even consider maintaining a mirror of the part of the npm registry that you use and require human confirmation before mirroring unpublishes. + +
+Of course, the details of each use case matter a lot, and the world doesn't have to pick just one solution. If you like checking in node_modules, you should keep doing that. We've chosen the shrinkwrap route because that works better for us. + +
+It's not exactly news that Joyent is heavy on Node. Node is the heart of our SmartDataCenter (SDC) product, whose public-facing web portal, public API, Cloud Analytics, provisioning, billing, heartbeating, and other services are all implemented in Node. That's why it's so important to us to have robust components (like logging and REST) and tools for understanding production failures postmortem, profiling Node apps in production, and now managing Node dependencies. Again, we're interested to hear feedback from others using these tools.
+[1] Much of this section is taken directly from the "npm shrinkwrap" documentation. + +
+[2] We've had a lot of trouble with checking in node_modules with binary dependencies. The first problem is figuring out exactly which files not to check in (.o, .node, .dynlib, .so, *.a, ...). When Mark went to apply this to one of our internal services, the "npm rebuild" step blew away half of the dependency tree because it ran "make clean", which in dependency ldapjs brings the repo to a clean slate by blowing away its dependencies. Later, a new (but highly experienced) engineer on our team was tasked with fixing a bug in our Node-based DHCP server. To fix the bug, we went with a new dependency. He tried checking in node_modules, which added 190,000 lines of code (to this repo that was previously a few hundred LOC). And despite doing everything he could think of to do this correctly and test it properly, the change broke the build because of the binary modules. So having tried this approach a few times now, it appears quite difficult to get right, and as I pointed out above, the lack of actual documentation and real world examples suggests others either aren't using binary modules (which we know isn't true) or haven't had much better luck with this approach. + +
+[3] Like a good Node-based distributed system, our architecture uses lots of small HTTP servers. Each of these serves a REST API using restify. restify uses the binary module node-dtrace-provider, which gives each of our services deep DTrace-based observability for free. So literally almost all of our components are or will soon be depending on a binary add-on. Additionally, the foundation of Cloud Analytics are a pair of binary modules that extract data from DTrace and kstat. So this isn't a corner case for us, and we don't believe we're exceptional in this regard. The popular hiredis package for interfacing with redis from Node is also a binary module. + +
+[4] Note that I said this is an important principle for software version control, not using git in general. People use git for lots of things where checking in binaries and other derived files is probably fine. Also, I'm not interested in proselytizing; if you want to do this for software version control too, go ahead. But don't do it out of ignorance of existing successful software engineering practices.
diff --git a/locale/fa/blog/npm/npm-1-0-global-vs-local-installation.md b/locale/fa/blog/npm/npm-1-0-global-vs-local-installation.md new file mode 100644 index 0000000000000..380eb5f486010 --- /dev/null +++ b/locale/fa/blog/npm/npm-1-0-global-vs-local-installation.md @@ -0,0 +1,67 @@ +--- +title: "npm 1.0: Global vs Local installation" +author: Isaac Schlueter +date: 2011-03-24T06:07:13.000Z +status: publish +category: npm +slug: npm-1-0-global-vs-local-installation +layout: blog-post.hbs +--- + +npm 1.0 is in release candidate mode. Go get it!
+ +More than anything else, the driving force behind the npm 1.0 rearchitecture was the desire to simplify what a package installation directory structure looks like.
+ +In npm 0.x, there was a command called bundle that a lot of people liked. bundle let you install your dependencies locally in your project, but even still, it was basically a hack that never really worked very reliably.
Also, there was that activation/deactivation thing. That’s confusing.
+ +In npm 1.0, there are two ways to install things:
+
+1. globally: This drops modules in {prefix}/lib/node_modules, and puts executable files in {prefix}/bin, where {prefix} is usually something like /usr/local. It also installs man pages in {prefix}/share/man, if they're supplied.
+2. locally: This installs your package in the current working directory. Node modules go in ./node_modules, executables go in ./node_modules/.bin/, and man pages aren't installed at all.
+
+Whether to install a package globally or locally depends on the global config, which is aliased to the -g command line switch.
Just like how global variables are kind of gross, but also necessary in some cases, global packages are important, but best avoided if not needed.
+ +In general, the rule of thumb is:
+
+1. If you're installing something that you want to use in your program, using require('whatever'), then install it locally, at the root of your project.
+2. If you're installing something that you want to use in your shell, on the command line or something, install it globally, so that its binaries end up in your PATH environment variable.
+
+Of course, there are some cases where you want to do both. Coffee-script and Express both are good examples of apps that have a command line interface, as well as a library. In those cases, you can do one of the following:
+
+1. Install it in both places. Seriously, are you that short on disk space? It's fine, really. They're tiny JavaScript programs.
+2. Install it globally, and then npm link coffee-script or npm link express (if you're on a platform that supports symbolic links.) Then you only need to update the global copy to update all the symlinks as well.
+
+The first option is the best in my opinion. Simple, clear, explicit. The second is really handy if you are going to re-use the same library in a bunch of different projects. (More on npm link in a future installment.)
You can probably think of other ways to do it by messing with environment variables. But I don’t recommend those ways. Go with the grain.
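+
+One quick way to see the local rule in action: require() resolves from the local node_modules tree, not from the global space (the path below is hypothetical):
+
+```
+// From a project with express installed locally:
+console.log(require.resolve('express'));
+// => /Users/me/projects/foo/node_modules/express/index.js
+// A package that is only installed globally would make this throw,
+// since global installs are deliberately not on the require() path.
+```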
+ +Let’s say you do something like this:
+ +cd ~/projects/foo # go into my project
+npm install express # ./node_modules/express
+cd lib/utils # move around in there
+vim some-thing.js # edit some stuff, work work work
+npm install redis # ./lib/utils/node_modules/redis!? ew.
+
+In this case, npm will install redis into ~/projects/foo/node_modules/redis. Sort of like how git will work anywhere within a git repository, npm will work anywhere within a package, defined by having a node_modules folder.
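+
+The lookup is easy to picture: walk up from the current directory until a node_modules folder is found (an approximation of npm's behavior, not its actual code):
+
+```
+// Approximate how npm decides which package it is "in".
+var fs = require('fs');
+var path = require('path');
+
+function findPackageRoot(dir) {
+  while (true) {
+    if (fs.existsSync(path.join(dir, 'node_modules'))) return dir;
+    var parent = path.dirname(dir);
+    if (parent === dir) return null;  // hit the filesystem root
+    dir = parent;
+  }
+}
+
+console.log(findPackageRoot(process.cwd()));
+```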
If your package's scripts.test command uses a command-line program installed by one of your dependencies, not to worry. npm makes ./node_modules/.bin the first entry in the PATH environment variable when running any lifecycle scripts, so this will work fine, even if your program is not globally installed:
+
+
{ "name" : "my-program"
+, "version" : "1.2.3"
+, "dependencies": { "express": "*", "coffee-script": "*" }
+, "devDependencies": { "vows": "*" }
+, "scripts":
+ { "test": "vows test/*.js"
+ , "preinstall": "cake build" } }
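+
+Under the hood, the PATH tweak amounts to something like this (a simplified sketch, not npm's actual implementation):
+
+```
+// Run a lifecycle script with ./node_modules/.bin first on the PATH.
+var path = require('path');
+var spawn = require('child_process').spawn;
+
+function runScript(pkgRoot, command) {
+  var env = {};
+  Object.keys(process.env).forEach(function (k) { env[k] = process.env[k]; });
+  env.PATH = path.join(pkgRoot, 'node_modules', '.bin') +
+             path.delimiter + env.PATH;
+  return spawn('sh', ['-c', command], { env: env, stdio: 'inherit' });
+}
+
+runScript(process.cwd(), 'vows test/*.js');  // the "test" script from above
+```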
diff --git a/locale/fa/blog/npm/npm-1-0-link.md b/locale/fa/blog/npm/npm-1-0-link.md
new file mode 100644
index 0000000000000..d8fd1304742f6
--- /dev/null
+++ b/locale/fa/blog/npm/npm-1-0-link.md
@@ -0,0 +1,117 @@
+---
+title: "npm 1.0: link"
+author: Isaac Schlueter
+date: 2011-04-07T00:40:33.000Z
+status: publish
+category: npm
+slug: npm-1-0-link
+layout: blog-post.hbs
+---
+
+npm 1.0 is in release candidate mode. Go get it!
+ +In npm 0.x, there was a command called link. With it, you could “link-install” a package so that changes would be reflected in real-time. This is especially handy when you’re actually building something. You could make a few changes, run the command again, and voila, your new code would be run without having to re-install every time.
Of course, compiled modules still have to be rebuilt. That’s not ideal, but it’s a problem that will take more powerful magic to solve.
+ +In npm 0.x, this was a pretty awful kludge. Back then, every package existed in some folder like:
+ +prefix/lib/node/.npm/my-package/1.3.6/package
+
+
+and the package’s version and name could be inferred from the path. Then, symbolic links were set up that looked like:
+ +prefix/lib/node/my-package@1.3.6 -> ./.npm/my-package/1.3.6/package
+
+
+It was easy enough to point that symlink to a different location. However, since the package.json file could change, that meant that the connection between the version and the folder was not reliable.
+ +At first, this was just sort of something that we dealt with by saying, “Relink if you change the version.” However, as more and more edge cases arose, eventually the solution was to give link packages this fakey version of “9999.0.0-LINK-hash” so that npm knew it was an impostor. Sometimes the package was treated as if it had the 9999.0.0 version, and other times it was treated as if it had the version specified in the package.json.
+ +For npm 1.0, we backed up and looked at what the actual use cases were. Most of the time when you link something you want one of the following:
+
+1. Develop a package and have its executable(s) and module available globally while you work on it.
+2. Install a package you're developing into some other project that you're also working on, so that you can require() it.
+
+And, in both cases, changes should be immediately apparent and not require any re-linking.
+ +Also, there’s a third use case that I didn’t really appreciate until I started writing more programs that had more dependencies:
+ +Globally install something, and use it in development in a bunch of projects, and then update them all at once so that they all use the latest version.
Really, the second case above is a special-case of this third case.
+ +The first step is to link your local project into the global install space. (See global vs local installation for more on this global/local business.)
+ +I do this as I’m developing node projects (including npm itself).
+ +cd ~/dev/js/node-tap # go into the project dir
+npm link # create symlinks into {prefix}
+
+
+Because of how I have my computer set up, with /usr/local as my install prefix, I end up with a symlink from /usr/local/lib/node_modules/tap pointing to ~/dev/js/node-tap, and the executable linked to /usr/local/bin/tap.
Of course, if you set your paths differently, then you’ll have different results. (That’s why I tend to talk in terms of prefix rather than /usr/local.)
When you want to link the globally-installed package into your local development folder, you run npm link pkg where pkg is the name of the package that you want to install.
For example, let’s say that I wanted to write some tap tests for my node-glob package. I’d first do the steps above to link tap into the global install space, and then I’d do this:
+ +cd ~/dev/js/node-glob # go to the project that uses the thing.
+npm link tap # link the global thing into my project.
+
+
+Now when I make changes in ~/dev/js/node-tap, they’ll be immediately reflected in ~/dev/js/node-glob/node_modules/tap.
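+
+With the link in place, a test file in node-glob picks up the development copy of tap automatically. A trivial, made-up example:
+
+```
+// test/basic.js -- runs against the linked development copy of tap.
+var test = require('tap').test;
+
+test('glob exports a function', function (t) {
+  var glob = require('glob');
+  t.type(glob, 'function');
+  t.end();
+});
+```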
Let’s say I have 15 sites that all use express. I want the benefits of local development, but I also want to be able to update all my dev folders at once. You can globally install express, and then link it into your local development folder.
+ +npm install express -g # install express globally
+cd ~/dev/js/my-blog # development folder one
+npm link express # link the global express into ./node_modules
+cd ~/dev/js/photo-site # other project folder
+npm link express # link express into here, as well
+
+ # time passes
+ # TJ releases some new stuff.
+ # you want this new stuff.
+
+npm update express -g # update the global install.
+ # this also updates my project folders.
+
+
+npm link is a development tool. It’s awesome for managing packages on your local development box. But deploying with npm link is basically asking for problems, since it makes it super easy to update things without realizing it.
+
+I highly doubt that a native Windows node will ever have comparable symbolic link support to what Unix systems provide. I know that there are junctions and such, and I've heard legends about symbolic links on Windows 7.
+
+When there is a native Windows port of Node, if that native Windows port has `fs.symlink` and `fs.readlink` support that is exactly identical to the way that they work on Unix, then this should work fine.
+
+But I wouldn't hold my breath. Any bugs about this not working on a native Windows system (i.e., not Cygwin) will most likely be closed with wontfix.
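+
+For reference, the Unix primitives in question are tiny (a toy demonstration with hypothetical paths):
+
+```
+// What `npm link` fundamentally relies on:
+var fs = require('fs');
+
+fs.symlinkSync('/Users/me/dev/js/node-tap',          // where the code lives
+               '/usr/local/lib/node_modules/tap');   // where the link goes
+
+console.log(fs.readlinkSync('/usr/local/lib/node_modules/tap'));
+// => /Users/me/dev/js/node-tap
+```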
Back before the Great Package Management Wars of Node 0.1, before npm or kiwi or mode or seed.js could do much of anything, and certainly before any of them had more than 2 users, Mikeal Rogers invited me to the Couch.io offices for lunch to talk about this npm registry thingie I’d mentioned wanting to build. (That is, to convince me to use CouchDB for it.)
+ +Since he was volunteering to build the first version of it, and since couch is pretty much the ideal candidate for this use-case, it was an easy sell.
+ +While I was there, he said, “Look. You need to be able to link a project directory as if it was installed as a package, and then have it all Just Work. Can you do that?”
+ +I was like, “Well, I don’t know… I mean, there’s these edge cases, and it doesn’t really fit with the existing folder structure very well…”
+ +“Dude. Either you do it, or I’m going to have to do it, and then there’ll be another package manager in node, instead of writing a registry for npm, and it won’t be as good anyway. Don’t be python.”
+ +The rest is history.
diff --git a/locale/fa/blog/npm/npm-1-0-released.md b/locale/fa/blog/npm/npm-1-0-released.md new file mode 100644 index 0000000000000..abc105708d448 --- /dev/null +++ b/locale/fa/blog/npm/npm-1-0-released.md @@ -0,0 +1,39 @@ +--- +title: "npm 1.0: Released" +author: Isaac Schlueter +date: 2011-05-01T15:09:45.000Z +status: publish +category: npm +slug: npm-1-0-released +layout: blog-post.hbs +--- + +npm 1.0 has been released. Here are the highlights:
+ +The focus is on npm being a development tool, rather than an apt-wannabe.
+ +To get the new version, run this command:
+ +curl https://npmjs.com/install.sh | sh
+
+This will prompt to ask you if it’s ok to remove all the old 0.x cruft. If you want to not be asked, then do this:
+ +curl https://npmjs.com/install.sh | clean=yes sh
+
+Or, if you want to not do the cleanup, and leave the old stuff behind, then do this:
+ +curl https://npmjs.com/install.sh | clean=no sh
+
+A lot of people in the node community were brave testers and helped make this release a lot better (and swifter) than it would have otherwise been. Thanks :)
+ +npm will not have any major feature enhancements or architectural changes for at least 6 months. There are interesting developments planned that leverage npm in some ways, but it’s time to let the client itself settle. Also, I want to focus attention on some other problems for a little while.
+ +Of course, bug reports are always welcome.
+ +See you at NodeConf!
diff --git a/locale/fa/blog/npm/npm-1-0-the-new-ls.md b/locale/fa/blog/npm/npm-1-0-the-new-ls.md new file mode 100644 index 0000000000000..b2b72067e91fa --- /dev/null +++ b/locale/fa/blog/npm/npm-1-0-the-new-ls.md @@ -0,0 +1,147 @@ +--- +title: "npm 1.0: The New 'ls'" +author: Isaac Schlueter +date: 2011-03-18T06:22:17.000Z +status: publish +category: npm +slug: npm-1-0-the-new-ls +layout: blog-post.hbs +--- + +This is the first in a series of hopefully more than 1 posts, each detailing some aspect of npm 1.0.
+ +In npm 0.x, the ls command was a combination of both searching the registry as well as reporting on what you have installed.
As the registry has grown in size, this has gotten unwieldy. Also, since npm 1.0 manages dependencies differently, nesting them in node_modules folder and installing locally by default, there are different things that you want to view.
The functionality of the ls command was split into two different parts. search is now the way to find things on the registry (and it only reports one line per package, instead of one line per version), and ls shows a tree view of the packages that are installed locally.
Here’s an example of the output:
+ +$ npm ls
+npm@1.0.0 /Users/isaacs/dev-src/js/npm
+├── semver@1.0.1
+├─┬ ronn@0.3.5
+│ └── opts@1.2.1
+└─┬ express@2.0.0rc3 extraneous
+ ├─┬ connect@1.1.0
+ │ ├── qs@0.0.7
+ │ └── mime@1.2.1
+ ├── mime@1.2.1
+ └── qs@0.0.7
+
+
+This is after I’ve done npm install semver ronn express in the npm source directory. Since express isn’t actually a dependency of npm, it shows up with that “extraneous” marker.
Let’s see what happens when we create a broken situation:
+ +$ rm -rf ./node_modules/express/node_modules/connect
+$ npm ls
+npm@1.0.0 /Users/isaacs/dev-src/js/npm
+├── semver@1.0.1
+├─┬ ronn@0.3.5
+│ └── opts@1.2.1
+└─┬ express@2.0.0rc3 extraneous
+ ├── UNMET DEPENDENCY connect >= 1.1.0 < 2.0.0
+ ├── mime@1.2.1
+ └── qs@0.0.7
+
+
+Tree views are great for human readability, but sometimes you want to pipe that stuff to another program. For that output, I took the same data structure, but instead of building up a tree-view string for each line, it spits out just the folders, like this:
+ +$ npm ls -p
+/Users/isaacs/dev-src/js/npm
+/Users/isaacs/dev-src/js/npm/node_modules/semver
+/Users/isaacs/dev-src/js/npm/node_modules/ronn
+/Users/isaacs/dev-src/js/npm/node_modules/ronn/node_modules/opts
+/Users/isaacs/dev-src/js/npm/node_modules/express
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/qs
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/mime
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/mime
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/qs
+
+
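+
+That parseable form is easy to consume from another program. For example (a small sketch using child_process; note npm may exit non-zero if the tree has problems):
+
+```
+// Consume `npm ls -p` from a script.
+var exec = require('child_process').exec;
+
+exec('npm ls -p', function (err, stdout) {
+  // Even on a non-zero exit (extraneous/unmet deps), the folder
+  // list on stdout is still usable.
+  stdout.trim().split('\n').forEach(function (folder) {
+    console.log('installed at:', folder);
+  });
+});
+```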
+Since you sometimes want a bigger view, I added the --long option (shorthand: -l) to spit out more info:
$ npm ls -l
+npm@1.0.0
+│ /Users/isaacs/dev-src/js/npm
+│ A package manager for node
+│ git://github.com/isaacs/npm.git
+│ https://npmjs.com/
+├── semver@1.0.1
+│ ./node_modules/semver
+│ The semantic version parser used by npm.
+│ git://github.com/isaacs/node-semver.git
+├─┬ ronn@0.3.5
+│ │ ./node_modules/ronn
+│ │ markdown to roff and html converter
+│ └── opts@1.2.1
+│ ./node_modules/ronn/node_modules/opts
+│ Command line argument parser written in the style of commonjs. To be used with node.js
+└─┬ express@2.0.0rc3 extraneous
+ │ ./node_modules/express
+ │ Sinatra inspired web development framework
+ ├─┬ connect@1.1.0
+ │ │ ./node_modules/express/node_modules/connect
+ │ │ High performance middleware framework
+ │ │ git://github.com/senchalabs/connect.git
+ │ ├── qs@0.0.7
+ │ │ ./node_modules/express/node_modules/connect/node_modules/qs
+ │ │ querystring parser
+ │ └── mime@1.2.1
+ │ ./node_modules/express/node_modules/connect/node_modules/mime
+ │ A comprehensive library for mime-type mapping
+ ├── mime@1.2.1
+ │ ./node_modules/express/node_modules/mime
+ │ A comprehensive library for mime-type mapping
+ └── qs@0.0.7
+ ./node_modules/express/node_modules/qs
+ querystring parser
+
+$ npm ls -lp
+/Users/isaacs/dev-src/js/npm:npm@1.0.0::::
+/Users/isaacs/dev-src/js/npm/node_modules/semver:semver@1.0.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/ronn:ronn@0.3.5::::
+/Users/isaacs/dev-src/js/npm/node_modules/ronn/node_modules/opts:opts@1.2.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/express:express@2.0.0rc3:EXTRANEOUS:::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect:connect@1.1.0::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/qs:qs@0.0.7::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/mime:mime@1.2.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/mime:mime@1.2.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/qs:qs@0.0.7::::
+
+
+And, if you want to get at the globally-installed modules, you can use ls with the global flag:
+ +$ npm ls -g
+/usr/local
+├─┬ A@1.2.3 -> /Users/isaacs/dev-src/js/A
+│ ├── B@1.2.3 -> /Users/isaacs/dev-src/js/B
+│ └─┬ npm@0.3.15
+│ └── semver@1.0.1
+├─┬ B@1.2.3 -> /Users/isaacs/dev-src/js/B
+│ └── A@1.2.3 -> /Users/isaacs/dev-src/js/A
+├── glob@2.0.5
+├─┬ npm@1.0.0 -> /Users/isaacs/dev-src/js/npm
+│ ├── semver@1.0.1
+│ └─┬ ronn@0.3.5
+│ └── opts@1.2.1
+└── supervisor@0.1.2 -> /Users/isaacs/dev-src/js/node-supervisor
+
+$ npm ls -gpl
+/usr/local:::::
+/usr/local/lib/node_modules/A:A@1.2.3::::/Users/isaacs/dev-src/js/A
+/usr/local/lib/node_modules/A/node_modules/npm:npm@0.3.15::::/Users/isaacs/dev-src/js/A/node_modules/npm
+/usr/local/lib/node_modules/A/node_modules/npm/node_modules/semver:semver@1.0.1::::/Users/isaacs/dev-src/js/A/node_modules/npm/node_modules/semver
+/usr/local/lib/node_modules/B:B@1.2.3::::/Users/isaacs/dev-src/js/B
+/usr/local/lib/node_modules/glob:glob@2.0.5::::
+/usr/local/lib/node_modules/npm:npm@1.0.0::::/Users/isaacs/dev-src/js/npm
+/usr/local/lib/node_modules/npm/node_modules/semver:semver@1.0.1::::/Users/isaacs/dev-src/js/npm/node_modules/semver
+/usr/local/lib/node_modules/npm/node_modules/ronn:ronn@0.3.5::::/Users/isaacs/dev-src/js/npm/node_modules/ronn
+/usr/local/lib/node_modules/npm/node_modules/ronn/node_modules/opts:opts@1.2.1::::/Users/isaacs/dev-src/js/npm/node_modules/ronn/node_modules/opts
+/usr/local/lib/node_modules/supervisor:supervisor@0.1.2::::/Users/isaacs/dev-src/js/node-supervisor
+
+
+Those -> flags are indications that the package is link-installed, which will be covered in the next installment.