From afd8e3ee6bc6ab903279585abebef8d6d646327f Mon Sep 17 00:00:00 2001 From: shimks Date: Thu, 15 Mar 2018 13:08:11 -0400 Subject: [PATCH 1/2] docs: migrate missing pages from loopback.io --- docs/site/Booting-an-Application.md | 238 +++++++++++ docs/site/FAQ.md | 2 + docs/site/Testing-your-application.md | 559 ++++++++++++++++++-------- 3 files changed, 635 insertions(+), 164 deletions(-) create mode 100644 docs/site/Booting-an-Application.md diff --git a/docs/site/Booting-an-Application.md b/docs/site/Booting-an-Application.md new file mode 100644 index 000000000000..b7120668020d --- /dev/null +++ b/docs/site/Booting-an-Application.md @@ -0,0 +1,238 @@ +--- +lang: en +title: 'Booting an Application' +keywords: LoopBack 4.0, LoopBack 4 +tags: +sidebar: lb4_sidebar +permalink: /doc/en/lb4/Booting-an-Application.html +summary: +--- + +## What does Booting an Application mean? + +A typical LoopBack application is made up of many artifacts in different files, +organized in different folders. **Booting an Application** means: + +* Discovering artifacts automatically based on a convention (a specific folder + containing files with a given suffix) +* Processing those artifacts (this usually means automatically binding them to the Application's Context) + +`@loopback/boot` provides a Bootstrapper that uses Booters to automatically +discover and bind artifacts, all packaged in an easy-to-use Mixin. + +### What is an artifact? + +An artifact is any LoopBack construct usually defined in code as a Class. LoopBack +constructs include Controllers, Repositories, Models, etc. + +## Usage + +### @loopback/cli + +New projects generated using `@loopback/cli` or `lb4` are automatically enabled +to use `@loopback/boot` for booting the Application using the conventions +followed by the CLI. + +### Adding to existing project + +See [Using the BootMixin](#using-the-bootmixin) to add Boot to your Project manually. 
+ +--- + +The rest of this page describes the inner workings of `@loopback/boot` for advanced use +cases, manual usage or using `@loopback/boot` as a standalone package (with custom +booters). + +## BootMixin + +Boot functionality can be added to a LoopBack 4 Application by mixing it with the +`BootMixin`. The Mixin adds the `BootComponent` to your Application as well as +convenience methods such as `app.boot()` and `app.booters()`. The Mixin also allows +Components to set the property `booters` as an Array of `Booters`. They will be bound +to the Application and called by the `Bootstrapper`. + +Since this is a convention-based Bootstrapper, it is important to set a `projectRoot`, +as all other artifact paths will be resolved relative to this path. + +_Tip_: `application.ts` will likely be at the root of your project, so its path can be +used to set the `projectRoot` by using the `__dirname` variable. _(See example below)_ + +### Using the BootMixin + +`Booter` and `Binding` types must be imported alongside `BootMixin` to allow TypeScript +to infer types and avoid errors. _If using `tslint` with the `no-unused-variable` rule, +you can disable it for the import line by adding `// tslint:disable-next-line:no-unused-variable` +above the import statement_. + +```ts +import {BootMixin, Booter, Binding} from "@loopback/boot"; + +class MyApplication extends BootMixin(Application) { + constructor(options?: ApplicationConfig) { + super(options); + // Setting the projectRoot + this.projectRoot = __dirname; + // Set project conventions + this.bootOptions: BootOptions = { + controllers: { + dirs: ['controllers'], + extensions: ['.controller.js'], + nested: true, + } + } + } +} +``` + +Now just call `app.boot()` from `index.ts` before starting your Application using `app.start()`. + +#### app.boot() + +A convenience method to retrieve the `Bootstrapper` instance bound to the +Application and calls its `boot` function. 
This should be called before an
Application's `start()` method is called. _This is an `async` function and should
be called with `await`._

```ts
class MyApp extends BootMixin(Application) {}

async function main() {
  const app = new MyApp();
  app.projectRoot = __dirname;
  await app.boot();
  await app.start();
}
```

#### app.booters()

A convenience method to manually bind `Booters`. You can pass any number of `Booter`
classes to this method and they will all be bound to the Application using the
prefix (`booters.`) and tag (`booter`) used by the `Bootstrapper`.

```ts
// Binds MyCustomBooter to `booters.MyCustomBooter`
// Binds AnotherCustomBooter to `booters.AnotherCustomBooter`
// Both will have the `booter` tag set.
app.booters(MyCustomBooter, AnotherCustomBooter);
```

## BootComponent

This component is added to an Application by `BootMixin` if used. This Component:

* Provides a list of default `booters` as a property of the component
* Binds the conventional Bootstrapper to the Application

_If using this as a standalone component without the `BootMixin`, you will need to
bind the `booters` of a component manually._

```ts
app.component(BootComponent);
```

## Bootstrapper

A Class that acts as the "manager" for Booters. The Bootstrapper is designed to be
bound to an Application as a `SINGLETON`. The Bootstrapper class provides a `boot()`
method. This method is responsible for getting all bound `Booters` and running
their `phases`. A `phase` is a method on a `Booter` class.

Each `boot()` method call creates a new `Context` that sets the `app` context
as its parent. This is done so that each call to `boot()` gets a fresh instance of each
`booter`, while the same context can be passed into `boot()` so that selected `phases`
can be run across different calls of `boot()`.

The Bootstrapper can be configured to run specific booters or boot phases
by passing in `BootExecOptions`. **This is experimental and subject to change.
Hence,
this functionality is not exposed when calling `boot()` via `BootMixin`**.

To use `BootExecOptions`, you must directly call `bootstrapper.boot()` instead of `app.boot()`.
You can pass in the `BootExecOptions` object with the following properties:

| Property | Type | Description |
| ---------------- | ----------------------- | ------------------------------------------------ |
| `booters` | `Constructor<Booter>[]` | Array of Booters to bind before running `boot()` |
| `filter.booters` | `string[]` | Names of Booter classes that should be run |
| `filter.phases` | `string[]` | Names of Booter phases to run |

### Example

```ts
import {
  BootMixin,
  Booter,
  Binding,
  Bootstrapper,
  BootBindings,
} from "@loopback/boot";

class MyApp extends BootMixin(Application) {}

async function main() {
  const app = new MyApp();
  app.projectRoot = __dirname;

  const bootstrapper: Bootstrapper = await app.get(
    BootBindings.BOOTSTRAPPER_KEY
  );
  await bootstrapper.boot({
    booters: [MyCustomBooter],
    filter: {
      booters: ["MyCustomBooter"],
      phases: ["configure", "discover"] // Skip the `load` phase.
    }
  });
}
```

## Booters

A Booter is a class that is responsible for booting an artifact. A Booter does its
work in `phases` which are called by the Bootstrapper. The following Booters are
a part of the `@loopback/boot` package and loaded automatically via `BootMixin`.

### Controller Booter

This Booter's purpose is to discover [Controller](Controllers.html) type Artifacts and to bind
them to the Application's Context.

You can configure the conventions used in your
project for a Controller by passing a `controllers` object on the `BootOptions` property
of your Application.
The `controllers` object supports the following options:

| Options | Type | Default | Description |
| ------------ | -------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------- |
| `dirs` | `string \| string[]` | `['controllers']` | Paths relative to projectRoot to look in for Controller artifacts |
| `extensions` | `string \| string[]` | `['.controller.js']` | File extensions to match for Controller artifacts |
| `nested` | `boolean` | `true` | Look in nested directories in `dirs` for Controller artifacts |
| `glob` | `string` | | A `glob` pattern string. This takes precedence over the above three options (which are used to make a glob pattern). |

### Repository Booter

This Booter's purpose is to discover [Repository](Repository.html) type Artifacts and to bind
them to the Application's Context. The use of this Booter requires `RepositoryMixin`
from `@loopback/repository` to be mixed into your Application class.

You can configure the conventions used in your
project for a Repository by passing a `repositories` object on the `BootOptions` property
of your Application. The `repositories` object supports the following options:

| Options | Type | Default | Description |
| ------------ | -------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------- |
| `dirs` | `string \| string[]` | `['repositories']` | Paths relative to projectRoot to look in for Repository artifacts |
| `extensions` | `string \| string[]` | `['.repository.js']` | File extensions to match for Repository artifacts |
| `nested` | `boolean` | `true` | Look in nested directories in `dirs` for Repository artifacts |
| `glob` | `string` | | A `glob` pattern string. This takes precedence over the above three options (which are used to make a glob pattern).
|

### Custom Booters

A custom Booter can be written as a Class that implements the `Booter` interface. The Class
must implement methods that correspond to the `phase` names. The `phases` are called
by the Bootstrapper in a pre-determined order (unless overridden by `BootExecOptions`).
The next phase is only called once the previous phase has been completed for all Booters.

#### Phases

**configure**

Used to configure the `Booter` with its default options.

**discover**

Used to discover the artifacts supported by the `Booter` based on convention.

**load**

Used to bind the discovered artifacts to the Application.
diff --git a/docs/site/FAQ.md b/docs/site/FAQ.md
index 82887c56e119..8e18c545d559 100644
--- a/docs/site/FAQ.md
+++ b/docs/site/FAQ.md
@@ -16,6 +16,8 @@ summary: LoopBack 4 is a completely new framework, sometimes referred to as Loop
 - Suitable for small and large teams
 - Minimally opinionated, enforce your team's opinions instead

+See [Crafting LoopBack 4](Crafting-LoopBack-4.md) for more details.
+
 ### What’s the timeline for LoopBack 4?

 See [Upcoming releases](https://github.com/strongloop/loopback-next/wiki/Upcoming-Releases).
diff --git a/docs/site/Testing-your-application.md b/docs/site/Testing-your-application.md
index 1bb198d1bf4d..e93f179d48eb 100644
--- a/docs/site/Testing-your-application.md
+++ b/docs/site/Testing-your-application.md
@@ -11,30 +11,43 @@ summary:
 ## Overview

 A thorough automated test suite is important because it:
+
- Ensures your application works as expected.
- Prevents regressions when new features are added and bugs are fixed.
-- Helps new and existing developers understand different parts of the codebase (knowledge sharing).
-- Speeds up development over the long run (the code writes itself!)
+- Helps new and existing developers understand different parts of the codebase
+  (knowledge sharing).
+- Speeds up development over the long run (the code writes itself!).
### Types of tests -We encourage writing tests from a few perspectives, mainly [black-box testing](https://en.wikipedia.org/wiki/Black-box_testing) (acceptance) and [white-box testing](https://en.wikipedia.org/wiki/White-box_testing) (integration and unit). Tests are usually written using typical patterns such as [`arrange/act/assert`](https://msdn.microsoft.com/en-us/library/hh694602.aspx#Anchor_3) or [`given/when/then`](https://martinfowler.com/bliki/GivenWhenThen.html). While both styles work well, just pick one that you're comfortable with and start writing tests! +We encourage writing tests from a few perspectives, mainly [black-box testing](https://en.wikipedia.org/wiki/Black-box_testing) +(acceptance) and [white-box testing](https://en.wikipedia.org/wiki/White-box_testing) +(integration and unit). Tests are usually written using typical patterns such as +[`arrange/act/assert`](https://msdn.microsoft.com/en-us/library/hh694602.aspx#Anchor_3) +or [`given/when/then`](https://martinfowler.com/bliki/GivenWhenThen.html). +Both styles work well, so pick one that you're comfortable with and +start writing tests! -For an introduction to automated testing, see [Define your testing strategy](Thinking-in-LoopBack.md#define-your-testing-strategy); for a step-by-step tutorial, see [Incrementally implement features](Thinking-in-LoopBack.md#incrementally-implement-features). +For an introduction to automated testing, see [Define your testing strategy](Defining-your-testing-strategy.html). +For a step-by-step tutorial, see [Incrementally implement features](Implementing-features.html). -{% include important.html content="A great test suite requires you to think smaller and favor fast, focused unit tests over slow application-wide end-to-end tests +{% include important.html content=" +A great test suite requires you to think smaller and favor fast and focused +unit tests over slow end-to-end tests. 
" %} This article is a reference guide for common types of tests and test helpers. ## Project setup -An automated test suite requires a test runner to execute all the tests and produce a summary report. We use and recommend [Mocha](https://mochajs.org). +An automated test suite requires a test runner to execute all the tests and +produce a summary report. We use and recommend [Mocha](https://mochajs.org). -In addition to a test runner, the test suites generally requires: +In addition to a test runner, the test suites generally require: - An assertion library (we recommend [Should.js](https://shouldjs.github.io)). -- A Library for making HTTP calls and verifying their results (we recommend [supertest](https://github.com/visionmedia/supertest)). +- A library for making HTTP calls and verifying their results (we recommend + [supertest](https://github.com/visionmedia/supertest)). - A library for creating test doubles (we recommend [Sinon.JS](http://sinonjs.org/)). The [@loopback/testlab](https://www.npmjs.com/package/@loopback/testlab) module @@ -42,19 +55,15 @@ integrates these packages and makes them easy to use together with LoopBack. ### Set up testing infrastructure with LoopBack CLI -{% include note.html content="The LoopBack CLI does not yet support LoopBack 4, -so using the CLI is not an option with the beta release. -" %} - - +LoopBack applications that have been generated using the `lb4 app` command from +`@loopback/cli` come with `@loopback/testlab` and `mocha` as a default, +so no other testing infrastructure setup is needed. ### Setup testing infrastructure manually If you have an existing application install `mocha` and `@loopback/testlab`: -``` +```shell npm install --save-dev mocha @loopback/testlab ``` @@ -65,10 +74,11 @@ Your `package.json` should then look something like this: // ... "devDependencies": { "@loopback/testlab": "^", + "@types/mocha": "^", "mocha": "^" }, "scripts": { - "test": "mocha" + "test": "mocha --recursive \"dist/test\"" } // ... 
} @@ -76,24 +86,47 @@ Your `package.json` should then look something like this: ## Data handling -Tests accessing a real database often require existing data. For example, a method listing all products needs some products in the database; a method to create a new product instance must determine which properties are required and any restrictions on their values. There are various approaches to address this issue. Many of them unfortunately make the test suite difficult to understand, difficult to maintain, and prone to test failures unrelated to the changes made. +Tests accessing a real database often require existing data. For example, +a method listing all products needs some products in the database; a method +to create a new product instance must determine which properties are required +and any restrictions on their values. There are various approaches to address +this issue. Many of them unfortunately make the test suite difficult +to understand, difficult to maintain, and prone to test failures unrelated +to the changes made. -Based on our experience, we recommend the following approach. +Our approach to data handling, based on our experience, is described in this +section. ### Clean the database before each test -Always start with a clean database before each test. This may seem counter-intuitive: why not reset the database after the test has finished? When a test fails and the database is cleaned after the test has finished, then it's difficult to observe what was stored in the database and why the test failed. When the database is cleaned in the beginning, then any failing test will leave the database in the state that caused the test to fail. +Start with a clean database before each test. This may seem +counter-intuitive: why not reset the database after the test has finished? +When a test fails and the database is cleaned after the test has finished, +then it's difficult to observe what was stored in the database and why the test +failed. 
When the database is cleaned at the beginning, then any failing test
+will leave the database in the state that caused the test to fail.

-To clean the database before each test, set up a `beforeEach` hook to call a helper method; for example:
+To clean the database before each test, set up a `beforeEach` hook to call
+a helper method; for example:

{% include code-caption.html content="test/helpers/database.helpers.ts" %}
+
```ts
+import {ProductRepository, CategoryRepository} from '../../src/repositories';
+import {testdb} from '../fixtures/datasources/testdb.datasource';
+
export async function givenEmptyDatabase() {
-  await new ProductRepository().deleteAll();
-  await new CategoryRepository().deleteAll();
+  await new ProductRepository(testdb).deleteAll();
+  await new CategoryRepository(testdb).deleteAll();
}
+```
+
+{% include code-caption.html content="test/integration/controllers/product.controller.test.ts" %}
+```ts
// in your test file
+import {givenEmptyDatabase} from '../../helpers/database.helpers';
+
describe('ProductController (integration)', () => {
  beforeEach(givenEmptyDatabase);
  // etc.
@@ -102,46 +135,95 @@ describe('ProductController (integration)', () => {

### Use test data builders

-To avoid duplicating code for creating model data with all required properties filled in, use shared [test data builders](http://www.natpryce.com/articles/000714.html) instead. This enables tests to provide a small subset of properties that are strictly required by the tested scenario, which is important because it makes tests:
+To avoid duplicating code for creating model data that is complete with required
+properties, use shared [test data builders](http://www.natpryce.com/articles/000714.html).
+This enables tests to provide the small subset of properties that is strictly
+required by the tested scenario. Using shared test builders will help your tests
+to be:

-- Easier to understand, since it's immediately clear what model properties are relevant to the test.
If the test were setting all required properties, it would be difficult to tell whether some of those properties are actually relevant to the tested scenario.
+- Easier to understand, since it's immediately clear what model properties are
+  relevant to the tests. If the tests set all the required properties,
+  it is difficult to tell which of those properties are actually
+  relevant to the tested scenario.

-- Easier to maintain. As your data model evolves, you eventually need to add more required properties. If the tests were building model instance data manually, you would have to fix all tests to set the new required property. With a shared helper, there is only a single place where to add a value for the new required property.
+- Easier to maintain. As your data model evolves, you will need to add
+  more required properties. If the tests build the model instance data manually,
+  all the tests must be manually updated to set a new required property.
+  With a shared test data builder, you update a single location with the new
+  property.

-See [@loopback/openapi-spec-builder](https://www.npmjs.com/package/@loopback/openapi-spec-builder) for an example of how to apply this design pattern for building OpenAPI Spec documents.
+See [@loopback/openapi-spec-builder](https://www.npmjs.com/package/@loopback/openapi-spec-builder)
+for an example of how to apply this design pattern for building OpenAPI Spec
+documents.

-In practice, a rich method-based API is overkill and a simple function that adds missing required properties is sufficient.
+In practice, a simple function that adds missing required properties is
+sufficient.
+
+{% include code-caption.html content="test/helpers/database.helpers.ts" %}

```ts
-export function givenProductData(data: Partial<Product>) {
-  return Object.assign({
-    name: 'a-product-name',
-    slug: 'a-product-slug',
-    price: 1,
-    description: 'a-product-description',
-    available: true,
-  }, data);
+// ...
+export function givenProductData(data?: Partial<Product>) {
+  return Object.assign(
+    {
+      name: 'a-product-name',
+      slug: 'a-product-slug',
+      price: 1,
+      description: 'a-product-description',
+      available: true,
+    },
+    data,
+  );
}

-export async function givenProduct(data: Partial<Product>) {
-  return await new ProductRepository().create(
-    givenProductData(data));
+export async function givenProduct(data?: Partial<Product>) {
+  return await new ProductRepository(testdb).create(givenProductData(data));
}
+// ...
```

### Avoid sharing the same data for multiple tests

-It's tempting to define a small set of data that's shared by all tests. For example, in an e-commerce application, you might pre-populate the database with few categories, some products, an admin user and a customer. Such approach has several downsides:

-- When trying to understand any individual test, it's difficult to tell what part of the pre-populated data is essential for the test and what's irrelevant. For example, in a test checking the method counting the number of products in a category using a pre-populated category "Stationery", is it important that "Stationery" contains nested sub-categories or is that fact irrelevant? If it's irrelevant, then what are the other tests that depend on it?

-- As the application grows and new features are added, it's easier to add more properties to existing model instances rather than create new instances using only properties required by the new features. For example, when adding a category image, it's easier to add image to an existing category "Stationery" and perhaps keep another category "Groceries" without any image, rather than create two new categories "CategoryWithAnImage" and "CategoryMissingImage". This further amplifies the previous problem, because it's not clear that "Groceries" is the category that should be used by tests requiring a category with no image - the category name does not provide any hints on that.
- -- As the shared dataset grows (together with the application), the time required to bring the database into initial state grows too. Instead of running a few "DELETE ALL" queries before each test (which is relatively fast), you can end up with running tens to hundreds different commands creating different model instances, triggering slow index rebuilds along the way, and considerably slowing the test suite. - -Use the test data builders described in the previous section to populate your database with the data specific to your test only. - -Using the e-commerce example described above, this is how integration tests for the `CategoryRepository` might look: +It's tempting to define a small set of data to be shared by all tests. +For example, in an e-commerce application, you might pre-populate the database +with a few categories, some products, an admin user and a customer. +This approach has several downsides: + +- When trying to understand any individual test, it's difficult to tell what + part of the pre-populated data is essential for the test and what's + irrelevant. For example, in a test checking the method counting the number of + products in a category using a pre-populated category "Stationery", + is it important that "Stationery" contains nested sub-categories or is that + fact irrelevant? If it's irrelevant, then what are the other tests that + depend on it? + +- As the application grows and new features are added, it's easier to add more + properties to existing model instances rather than create new instances using + only the properties required by the new features. For example, when adding + a category image, it's easier to add image to an existing category + "Stationery" and perhaps keep another category "Groceries" without any image, + rather than creating two new categories "CategoryWithAnImage" and + "CategoryMissingImage". 
This further amplifies the previous problem, + because it's not clear that "Groceries" is the category that should be used + by tests requiring a category with no image - the category name does not + provide any hints on that. + +- As the shared dataset grows (together with the application), the time required + to bring the database into its initial state grows too. Instead of running a + few "DELETE ALL" queries before each test (which is relatively fast), + you may have to run tens or hundreds of different commands used to create + different model instances, thus triggering slow index rebuilds along the way + and slowing down the test suite considerably. + +Use the test data builders described in the previous section to populate your +database with the data specific to your test only. + + + + -Write higher-level helpers to share the code for re-creating common scenarios. For example, if your application has two kinds of users (admins and customers), then you may write the following helpers to simplify writing acceptance tests checking access control: +Write higher-level helpers to share the code for re-creating common scenarios. +For example, if your application has two kinds of users (admins and customers), +then you may write the following helpers to simplify writing acceptance tests +checking access control: ```ts async function givenAdminAndCustomer() { @@ -186,32 +271,71 @@ async function givenAdminAndCustomer() { ## Unit testing -Unit tests are considered "white-box" tests because they use an "inside-out" approach where the tests know about the internals and controls all the variables of the system being tested. Individual units are tested in isolation, their dependencies are replaced with [Test doubles](https://en.wikipedia.org/wiki/Test_double). +Unit tests are considered "white-box" tests because they use an "inside-out" +approach where the tests know about the internals and control all the variables +of the system being tested. 
Individual units are tested in isolation and their +dependencies are replaced with [Test doubles](https://en.wikipedia.org/wiki/Test_double). ### Use test doubles -Test doubles are functions or objects that look and behave like the real variants used in production, but are actually simplified versions giving the test more control of the behavior. For example, reproducing the situation where reading from a file failed because of a hard-drive error is pretty much impossible, unless we are using a test double that's simulating file-system API and giving us control of how what each call returns. +Test doubles are functions or objects that look and behave like the real +variants used in production, but are actually simplified versions that give the +test more control of the behavior. For example, reproducing the situation where +reading from a file failed because of a hard-drive error is pretty much +impossible. However, using a test double to simulate the file-system API +will provide control over what each call returns. -[Sinon.JS](http://sinonjs.org/) has become the de-facto standard for test doubles in Node.js and JavaScript/TypeScript in general. The `@loopback/testlab` package comes with Sinon preconfigured with TypeScript type definitions and integrated with Should.js assertions. +[Sinon.JS](http://sinonjs.org/) has become the de-facto standard for +test doubles in Node.js and JavaScript/TypeScript in general. +The `@loopback/testlab` package comes with Sinon preconfigured with TypeScript +type definitions and integrated with Should.js assertions. There are three kinds of test doubles provided by Sinon.JS: -- [Test spies](http://sinonjs.org/releases/v4.0.1/spies/) are functions that record arguments, the return value, the value of `this`, and exceptions thrown (if any) for all its calls. There are two types of spies: Some are anonymous functions, while others wrap methods that already exist in the system under test. 
- -- [Test stubs](http://sinonjs.org/releases/v4.0.1/stubs/) are functions (spies) with pre-programmed behavior. As spies, stubs can be either anonymous, or wrap existing functions. When wrapping an existing function with a stub, the original function is not called. - -- [Test mocks](http://sinonjs.org/releases/v4.0.1/mocks/) (and mock expectations) are fake methods (like spies) with pre-programmed behavior (like stubs) as well as pre-programmed expectations. A mock will fail your test if it is not used as expected. - -{% include note.html content="We recommend against using test mocks. With test mocks, the expectations must be defined before the tested scenario is executed, which breaks the recommended test layout 'arrange-act-assert' (or 'given-when-then') and produces code that's difficult to comprehend. +- [Test spies](http://sinonjs.org/releases/v4.0.1/spies/) are functions that + record arguments, the return value, the value of `this`, and exceptions thrown + (if any) for all its calls. There are two types of spies: Some are + anonymous functions, while others wrap methods that already exist in the system + under test. + +- [Test stubs](http://sinonjs.org/releases/v4.0.1/stubs/) are functions (spies) + with pre-programmed behavior. As spies, stubs can be either anonymous, or wrap + existing functions. When wrapping an existing function with a stub, the original + function is not called. + +- [Test mocks](http://sinonjs.org/releases/v4.0.1/mocks/) + (and mock expectations) are fake methods (like spies) with pre-programmed + behavior (like stubs) as well as pre-programmed expectations. A mock will fail + your test if it is not used as expected. + +{% include note.html content=" +We recommend against using test mocks. With test mocks, the expectations must +be defined before the tested scenario is executed, which breaks the +recommended test layout 'arrange-act-assert' (or 'given-when-then') and also +produces code that's difficult to comprehend. 
" %} #### Create a stub Repository -When writing an application accessing data in a database, best practice is to use [repositories](Repositories.md) to encapsulate all data-access/persistence-related code and let other parts of the application (typically [controllers](Controllers.md)) to depend on these repositories for data access. To test Repository dependents (for example, Controllers) in isolation, we need to provide a test double, usually as a test stub. - -In traditional object-oriented languages like Java or C#, to enable unit tests to provide a custom implementation of the repository API, the controller needs to depend on an interface describing the API, and the repository implementation needs to implement this interface. The situation is easier in JavaScript and TypeScript. Thanks to the dynamic nature of the language, it’s possible to mock/stub entire classes. - -Creating a test double for a repository class is very easy using the Sinon.JS utility function `createStubInstance`. It's important to create a new stub instance for each unit test in order to prevent unintended re-use of pre-programmed behavior between (unrelated) tests. +When writing an application that accesses data in a database, the best +practice is to use [repositories](Repositories.html) to encapsulate all +data-access/persistence-related code. Other parts of the application +(typically [controllers](Controllers.html)) can then depend on these +repositories for data access. To test Repository dependents +(for example, Controllers) in isolation, we need to provide a test double, +usually as a test stub. + +In traditional object-oriented languages like Java or C#, to enable unit tests +to provide a custom implementation of the repository API, the controller needs +to depend on an interface describing the API, and the repository implementation +needs to implement this interface. The situation is easier in JavaScript and +TypeScript. 
Thanks to the dynamic nature of the language, it’s possible to
+mock/stub entire classes.
+
+Creating a test double for a repository class is very easy using the Sinon.JS
+utility function `createStubInstance`. It's important to create a new stub
+instance for each unit test in order to prevent unintended re-use of
+pre-programmed behavior between (unrelated) tests.

```ts
describe('ProductController', () => {
@@ -226,9 +350,12 @@ describe('ProductController', () => {
});
```

-In your unit tests, you will usually want to program the behavior of stubbed methods (what should they return) and then verify that the Controller (unit under test) called the right method with the correct arguments.
+In your unit tests, you will usually want to program the behavior of stubbed
+methods (what they should return) and then verify that the Controller
+(unit under test) called the right method with the correct arguments.

-Configure stub's behavior at the beginning of your unit test (in the "arrange" or "given" section):
+Configure the stub's behavior at the beginning of your unit test
+(in the "arrange" or "given" section):

```ts
// repository.find() will return a promise that
@@ -237,30 +364,42 @@ const findStub = repository.find as sinon.SinonStub;
findStub.resolves([{id: 1, name: 'Pen'}]);
```

-Verify how was the stubbed method executed at the end of your unit test (in the "assert" or "then" section):
+Verify how the stubbed method was executed at the end of your unit test
+(in the "assert" or "then" section):

```ts
// expect that repository.find() was called with the first
// argument deeply-equal to the provided object
-expect(findStub).to.be.calledWithMatch({where: {id: 1}});
+sinon.assert.calledWithMatch(findStub, {where: {id: 1}});
```

-See [Unit test your controllers](#unit-test-your-controllers) for a full example.
+See [Unit test your controllers](#unit-test-your-controllers) for a
+full example.

#### Create a stub Service

{% include content/tbd.html %}

-To be done.
The initial beta release does not include Services as a first-class feature.
+The initial beta release does not include Services as a first-class
+feature.

### Unit test your Controllers

-Unit tests should apply to the smallest piece of code possible to ensure other variables and state changes do not pollute the result. A typical unit test creates a controller instance with dependencies replaced by test doubles and directly calls the tested method. The example below gives the controller a stub implementation of its repository dependency, and then ensure the controller called repository's `find()` method with a correct query and returned back the query results. See [Create a stub repository](#create-a-stub-repository) for a detailed explanation.
+Unit tests should apply to the smallest piece of code possible to ensure that
+other variables and state changes do not pollute the result. A typical unit test
+creates a controller instance with dependencies replaced by test doubles and
+directly calls the tested method. The example below gives the controller a stub
+implementation of its repository dependency, ensures the controller
+calls the repository's `find()` method with a correct query, and returns
+the query results. See [Create a stub repository](#create-a-stub-repository)
+for a detailed explanation.
+ +{% include code-caption.html content="test/unit/controllers/product.controller.test.ts" %} -{% include code-caption.html content="test/controllers/product.controller.unit.ts" %} ```ts -import {ProductController, ProductRepository} from '../..'; import {expect, sinon} from '@loopback/testlab'; +import {ProductRepository} from '../../../src/repositories'; +import {ProductController} from '../../../src/controllers'; describe('ProductController (unit)', () => { let repository: ProductRepository; @@ -270,12 +409,12 @@ describe('ProductController (unit)', () => { it('retrieves details of a product', async () => { const controller = new ProductController(repository); const findStub = repository.find as sinon.SinonStub; - findStub.resolves([{id: 1, name: 'Pen'}]); + findStub.resolves([{name: 'Pen', slug: 'pen'}]); - const details = await controller.getDetails(1); + const details = await controller.getDetails('pen'); - expect(details).to.containDeep({name: 'Pen'}); - expect(findStub).to.be.calledWithMatch({where: {id: 1}}); + expect(details).to.containEql({name: 'Pen', slug: 'pen'}); + sinon.assert.calledWithMatch(findStub, {where: {slug: 'pen'}}); }); }); @@ -287,17 +426,23 @@ describe('ProductController (unit)', () => { ### Unit test your models and repositories -In a typical LoopBack application, models and repositories rely on behavior provided by the framework (`@loopback/repository` package) and there is no need to test LoopBack's built-in functionality. However, any additional application-specific API does need new unit tests. +In a typical LoopBack application, models and repositories rely on behavior +provided by the framework (`@loopback/repository` package) and there is no need +to test LoopBack's built-in functionality. However, any additional +application-specific APIs do need new unit tests. 
-For example, if the `Person` Model has properties `firstname`, `middlename` and `surname` and provides a function to obtain the full name, then you should write unit tests to verify the implementation of this additional method. +For example, if the `Person` Model has properties `firstname`, `middlename` and +`surname` and provides a function to obtain the full name, then you should write +unit tests to verify the implementation of this additional method. -Remember to use [Test data builders](#use-test-data-builders) whenever you need valid data to create a new model instance. +Remember to use [Test data builders](#use-test-data-builders) whenever you need +valid data to create a new model instance. -{% include code-caption.html content="test/unit/models/person.model.unit.ts" %} +{% include code-caption.html content="test/unit/models/person.model.test.ts" %} ```ts -import {Person} from '../../models/person.model' -import {givenPersonData} from '../helpers/database.helpers' +import {Person} from '../../../src/models'; +import {givenPersonData} from '../../helpers/database.helpers'; import {expect} from '@loopback/testlab'; describe('Person (unit)', () => { @@ -307,8 +452,8 @@ describe('Person (unit)', () => { const person = givenPerson({ firstname: 'Jane', middlename: 'Smith', - surname: 'Brown' - })); + surname: 'Brown', + }); const fullName = person.getFullName(); expect(fullName).to.equal('Jane Smith Brown'); @@ -317,8 +462,8 @@ describe('Person (unit)', () => { it('omits middlename when not present', () => { const person = givenPerson({ firstname: 'Mark', - surname: 'Twain' - })); + surname: 'Twain', + }); const fullName = person.getFullName(); expect(fullName).to.equal('Mark Twain'); @@ -331,46 +476,75 @@ describe('Person (unit)', () => { }); ``` -Writing a unit test for a custom repository methods is not straightforward because `CrudRepositoryImpl` is based on legacy loopback-datasource-juggler that was not designed with dependency injection in mind. 
Instead, use integration tests to verify the implementation of custom repository methods; see [Test your repositories against a real database](#test-your-repositories-against-a-real-database) in [Integration Testing](#integration-testing). +Writing a unit test for custom repository methods is not as straightforward +because `CrudRepository` is based on legacy [loopback-datasource-juggler](https://github.com/strongloop/loopback-datasource-juggler) +which was not designed with dependency injection in mind. Instead, use +integration tests to verify the implementation of custom repository methods. +For more information, refer to [Test your repositories against a real database](#test-your-repositories-against-a-real-database) +in [Integration Testing](#integration-testing). ### Unit test your Sequence -While it's possible to test a custom Sequence class in isolation, it's better to rely on acceptance-level tests in this exceptional case. The reason is that a custom Sequence class typically has many dependencies (which makes test setup too long and complex), and at the same time it provides very little functionality on top of the injected sequence actions. Bugs are much more likely to caused by the way how the real sequence action implementations interact together (which is not covered by unit tests), instead of the Sequence code itself (which is the only thing covered). +While it's possible to test a custom Sequence class in isolation, it's better +to rely on acceptance-level tests in this exceptional case. The reason is that +a custom Sequence class typically has many dependencies (which can make test +setup long and complex), and at the same time it provides very little +functionality on top of the injected sequence actions. Bugs are much more likely +to be caused by the way the real sequence action implementations interact +together (which is not covered by unit tests), instead of the Sequence code +itself (which is the only thing covered). 
-See [Test Sequence customizations](#test-sequence-customizations) in [Acceptance Testing](#acceptance-testing). +See [Test Sequence customizations](#test-sequence-customizations) in +[Acceptance Testing](#acceptance-end-to-end-testing). ### Unit test your Services {% include content/tbd.html %} -To be done. The initial beta release does not include Services as a first-class feature. +The initial beta release does not include Services as a first-class feature. See the following related GitHub issues: - - Define services to represent interactions with REST APIs, SOAP Web Services, gRPC services, and more: [#522](https://github.com/strongloop/loopback-next/issues/522) - - Guide: Services [#451](https://github.com/strongloop/loopback-next/issues/451) +- Define services to represent interactions with REST APIs, SOAP Web Services, + gRPC services, and more: [#522](https://github.com/strongloop/loopback-next/issues/522) +- Guide: Services [#451](https://github.com/strongloop/loopback-next/issues/451) ## Integration testing -Integration tests are considered "white-box" tests because they use an "inside-out" approach that tests how multiple units work together or with external services. You can use test doubles to isolate tested units from external variables/state that are not part of the tested scenario. +Integration tests are considered "white-box" tests because they use an +"inside-out" approach that tests how multiple units work together or with +external services. You can use test doubles to isolate tested units from +external variables/state that are not part of the tested scenario. ### Test your repositories against a real database There are two common reasons for adding repository tests: - - Your models are using advanced configuration, for example, custom column mappings, and you want to verify this configuration is correctly picked up by the framework. - - Your repositories have additional methods. 
-Integration tests are one of the places to put the best practices in [Data handling](#data-handling) to work: +- Your models are using an advanced configuration, for example, custom column + mappings, and you want to verify this configuration is correctly picked up by + the framework. +- Your repositories have additional methods. + +Integration tests are one of the places to put the best practices in +[Data handling](#data-handling) to work: - - Clean the database before each test - - Use test data builders - - Avoid sharing the same data for multiple tests +- Clean the database before each test +- Use test data builders +- Avoid sharing the same data for multiple tests -Here is an example showing how to write an integration test for a custom repository method `findByName`: +Here is an example showing how to write an integration test for a custom +repository method `findByName`: + +{% include code-caption.html content= "test/integration/repositories/category.repository.test.ts" %} -{% include code-caption.html content= "tests/integration/repositories/category.repository.integration.ts" %} ```ts -import {givenEmptyDatabase} from '../../helpers/database.helpers.ts'; +import { + givenEmptyDatabase, + givenCategory, +} from '../../helpers/database.helpers'; +import {CategoryRepository} from '../../../src/repositories'; +import {expect} from '@loopback/testlab'; +import {testdb} from '../../fixtures/datasources/testdb.datasource'; describe('CategoryRepository (integration)', () => { beforeEach(givenEmptyDatabase); @@ -378,8 +552,7 @@ describe('CategoryRepository (integration)', () => { describe('findByName(name)', () => { it('return the correct category', async () => { const stationery = await givenCategory({name: 'Stationery'}); - const groceries = await givenCategory({name: 'Groceries'}); - const repository = new CategoryRepository(); + const repository = new CategoryRepository(testdb); const found = await repository.findByName('Stationery'); @@ -389,27 +562,34 @@ 
describe('CategoryRepository (integration)', () => { }); ``` -### Test Controllers and repositories together +### Test controllers and repositories together -Integration tests running controllers with real repositories are important to verify that the controllers use the repository API correctly, and the commands and queries produce expected results when executed on a real database. These tests are similar to repository tests: we are just adding controllers as another ingredient. +Integration tests running controllers with real repositories are important to +verify that the controllers use the repository API correctly, and that the +commands and queries produce expected results when executed on a real database. +These tests are similar to repository tests with controllers added as +another ingredient. + +{% include code-caption.html content= "test/integration/controllers/product.controller.test.ts" %} ```ts -import {ProductController, ProductRepository, Product} from '../..'; import {expect} from '@loopback/testlab'; -import {givenEmptyDatabase, givenProduct} from '../helpers/database.helpers'; +import {givenEmptyDatabase, givenProduct} from '../../helpers/database.helpers'; +import {ProductController} from '../../../src/controllers'; +import {ProductRepository} from '../../../src/repositories'; +import {testdb} from '../../fixtures/datasources/testdb.datasource'; describe('ProductController (integration)', () => { beforeEach(givenEmptyDatabase); describe('getDetails()', () => { it('retrieves details of the given product', async () => { - const inkPen = await givenProduct({name: 'Pen', slug: 'pen'}); const pencil = await givenProduct({name: 'Pencil', slug: 'pencil'}); - const controller = new ProductController(new ProductRepository()); + const controller = new ProductController(new ProductRepository(testdb)); - const details = await controller.getDetails('pen'); + const details = await controller.getDetails('pencil'); - expect(details).to.eql(pencil); + 
expect(details).to.containEql(pencil); }); }); }); @@ -419,55 +599,95 @@ describe('ProductController (integration)', () => { {% include content/tbd.html %} -To be done. The initial beta release does not include Services as a first-class feature. +The initial beta release does not include Services as a first-class feature. ## Acceptance (end-to-end) testing -Automated acceptance (end-to-end) tests are considered "black-box" tests because they use an "outside-in" approach that is not concerned about the internals of the system, just simply do the same actions (send the same HTTP requests) as the clients and consumers of your API will do, and verify the results returned by the system under test are matching the expectations. +Automated acceptance (end-to-end) tests are considered "black-box" tests because +they use an "outside-in" approach that is not concerned about the internals of +the system. Acceptance tests perform the same actions (send the same HTTP +requests) as the clients and consumers of your API will do, and verify that the +results returned by the system match the expected results. -Typically, acceptance tests start the application, make HTTP requests to the server, and verify the returned response. LoopBack uses [supertest](https://github.com/visionmedia/supertest) to make the test code that executes HTTP requests and verifies responses easier to write and read. -Remember to follow the best practices from [Data handling](#data-handling) when setting up your database for tests: +Typically, acceptance tests start the application, make HTTP requests to the +server, and verify the returned response. LoopBack uses [supertest](https://github.com/visionmedia/supertest) +to create test code that simplifies both the execution of HTTP requests and the +verification of responses. 
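The start-request-verify cycle that supertest streamlines can be illustrated with nothing but Node's built-in `http` module. The following is a minimal sketch, not LoopBack code: the server, route, and payload are hypothetical stand-ins for a booted application.

```typescript
import * as http from 'http';

// A stand-in for a running application: a plain HTTP server that answers
// every request with a JSON body. In a real acceptance test the application
// would be started with app.start() instead.
const server = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({name: 'Pen', slug: 'pen'}));
});

// Minimal GET helper standing in for what supertest provides.
function getJson(url: string): Promise<{status: number; body: {slug: string}}> {
  return new Promise((resolve, reject) => {
    http
      .get(url, {agent: false}, res => {
        let data = '';
        res.on('data', chunk => (data += chunk));
        res.on('end', () =>
          resolve({status: res.statusCode || 0, body: JSON.parse(data)}),
        );
      })
      .on('error', reject);
  });
}

// The acceptance-test cycle: listen on an OS-assigned port, send the same
// HTTP request a client would send, then verify the response.
async function checkProductEndpoint() {
  await new Promise<void>(resolve => server.listen(0, resolve));
  const {port} = server.address() as {port: number};
  const response = await getJson(`http://127.0.0.1:${port}/product/pen`);
  server.close();
  return response;
}

const result = checkProductEndpoint();
result.then(r => console.log(r.status, r.body.slug)); // 200 pen
```

supertest wraps this entire cycle (port handling, request building, response assertions) behind a fluent API, which is why the acceptance tests in this guide are much shorter than this hand-rolled version.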
Remember to follow the best practices from +[Data handling](#data-handling) when setting up your database for tests: - - Clean the database before each test - - Use test data builders - - Avoid sharing the same data for multiple tests +- Clean the database before each test +- Use test data builders +- Avoid sharing the same data for multiple tests ### Validate your OpenAPI specification -The OpenAPI specification is a cornerstone of applications that provide REST APIs. -It enables API consumers to leverage a whole ecosystem of related tooling. To make the spec useful, you must ensure it's a valid OpenAPI Spec document, ideally in an automated way that's an integral part of regular CI builds. LoopBack's [testlab](https://www.npmjs.com/package/@loopback/testlab) module provides a helper method `validateApiSpec` that builds on top of the popular [swagger-parser](https://www.npmjs.com/package/swagger-parser) package. +The OpenAPI specification is a cornerstone of applications that provide +REST APIs. It enables API consumers to leverage a whole ecosystem of related +tooling. To make the spec useful, you must ensure it's a valid OpenAPI Spec +document, ideally in an automated way that's an integral part of regular CI +builds. LoopBack's [testlab](https://www.npmjs.com/package/@loopback/testlab) +module provides a helper method `validateApiSpec` that builds on top of the +popular [swagger-parser](https://www.npmjs.com/package/swagger-parser) package. 
Example usage:

+{% include code-caption.html content= "test/acceptance/api-spec.test.ts" %}
+
```ts
-// test/acceptance/api-spec.acceptance.ts
-import {validateApiSpec} from '@loopback/testlab';
-import {HelloWorldApp} from '../..';
+// test/acceptance/api-spec.test.ts
+import {HelloWorldApplication} from '../..';
import {RestServer} from '@loopback/rest';
+import {validateApiSpec} from '@loopback/testlab';

describe('API specification', () => {
  it('api spec is valid', async () => {
-    const app = new HelloWorldApp();
+    const app = new HelloWorldApplication();
    const server = await app.getServer(RestServer);
    const spec = server.getApiSpec();

-    await validateApiSpec(apiSpec);
+    await validateApiSpec(spec);
  });
});
```

### Perform an auto-generated smoke test of your REST API

-The formal validity of your application's spec does not guarantee that your implementation is actually matching the specified behavior. To keep your spec in sync with your implementation, you should use an automated tool like [Dredd](https://www.npmjs.com/package/dredd) to run a set of smoke tests to verify conformance of your app with the spec.
+{% include important.html content="
+The top-down approach for building LoopBack applications is not yet fully
+supported. Therefore, the code outlined in this section is outdated and may not
+work out of the box. It will be revisited after our MVP release.
+" %}
+
+The formal validity of your application's spec does not guarantee that your
+implementation is actually matching the specified behavior. To keep your spec
+in sync with your implementation, you should use an automated tool like [Dredd](https://www.npmjs.com/package/dredd)
+to run a set of smoke tests to verify your app conforms to the spec.

-Automated testing tools usually require little hints in your specification to tell them how to create valid requests or what response data to expect.
Dredd in particular relies on response [examples](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#exampleObject) and request parameter [x-example](http://dredd.org/en/latest/how-to-guides.html#example-values-for-request-parameters) fields. Extending your API spec with examples is good thing on its own, since developers consuming your API will find them useful too. +Automated testing tools usually require hints in your specification +to tell them how to create valid requests or what response data to expect. +Dredd in particular relies on response [examples](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#exampleObject) +and request parameter [x-example](http://dredd.org/en/latest/how-to-guides.html#example-values-for-request-parameters) +fields. Extending your API spec with examples is a good thing on its own, since +developers consuming your API will find them useful too. Here is an example showing how to run Dredd to test your API against the spec: -{% include code-caption.html content= " " %} +{% include code-caption.html content= "test/acceptance/api-spec.test.ts" %} + ```ts +import {expect} from '@loopback/testlab'; +import {HelloWorldApplication} from '../..'; +import {RestServer, RestBindings} from '@loopback/rest'; +import {spec} from '../../apidefs/openapi'; +const Dredd = require('dredd'); + describe('API (acceptance)', () => { + let app: HelloWorldApplication; + // tslint:disable no-any let dredd: any; before(initEnvironment); + after(async () => { + await app.stop(); + }); it('conforms to the specification', done => { dredd.run((err: Error, stats: object) => { @@ -482,13 +702,14 @@ describe('API (acceptance)', () => { }); async function initEnvironment() { - const app = new HelloWorldApp(); - const server = app.getServer(RestServer); + app = new HelloWorldApplication(); + const server = await app.getServer(RestServer); // For testing, we'll let the OS pick an available port by setting // 
RestBindings.PORT to 0. server.bind(RestBindings.PORT).to(0); // app.start() starts up the HTTP server and binds the acquired port // number to RestBindings.PORT. + await app.boot(); await app.start(); // Get the real port number. const port = await server.get(RestBindings.PORT); @@ -498,35 +719,48 @@ describe('API (acceptance)', () => { options: { level: 'fail', // report 'fail' case only silent: false, // false for helpful debugging info - path: [`${baseUrl}/swagger.json`], // to download apiSpec from the service - } + path: [`${baseUrl}/openapi.json`], // to download apiSpec from the service + }, }; dredd = new Dredd(config); - }); -}) + } +}); ``` -The user experience is not as great as we would like it, we are looking into better solutions; see [GitHub issue #644](https://github.com/strongloop/loopback-next/issues/644). Let us know if you can recommend one! +The user experience needs improvement and we are looking into +better solutions. See [GitHub issue #644](https://github.com/strongloop/loopback-next/issues/644). +Let us know if you have any recommendations! ### Test your individual REST API endpoints -You should have at least one acceptance (end-to-end) test for each of your REST API endpoints. Consider adding more tests if your endpoint depends on (custom) sequence actions to modify the behavior when the corresponding controller method is invoked via REST, compared to behavior observed when the controller method is invoked directly via JavaScript/TypeScript API. For example, if your endpoint returns different response to regular users and to admin users, then you should have two tests: one test for each user role. +You should have at least one acceptance (end-to-end) test for each of your +REST API endpoints. 
Consider adding more tests if your endpoint depends on
+(custom) sequence actions to modify the behavior when the corresponding
+controller method is invoked via REST, compared to behavior observed when
+the controller method is invoked directly via JavaScript/TypeScript API.
+For example, if your endpoint returns different responses to regular users
+and to admin users, then you should have two tests (one test for each user role).

Here is an example of an acceptance test:

+{% include code-caption.html content= "test/acceptance/product.test.ts" %}
+
```ts
-// test/acceptance/product.acceptance.ts
-import {HelloWorldApp} from '../..';
-import {RestBindings, RestServer} from '@loopback/rest';
-import {expect, supertest} from '@loopback/testlab';
+import {HelloWorldApplication} from '../..';
+import {expect, createClientForHandler, Client} from '@loopback/testlab';
import {givenEmptyDatabase, givenProduct} from '../helpers/database.helpers';
+import {RestServer, RestBindings} from '@loopback/rest';
+import {testdb} from '../fixtures/datasources/testdb.datasource';

describe('Product (acceptance)', () => {
-  let app: HelloWorldApp;
-  let request: supertest.SuperTest;
+  let app: HelloWorldApplication;
+  let client: Client;

  before(givenEmptyDatabase);
  before(givenRunningApp);
+  after(async () => {
+    await app.stop();
+  });

  it('retrieves product details', async () => {
    // arrange
@@ -540,36 +774,33 @@ describe('Product (acceptance)', () => {
      available: true,
      endDate: null,
    });
+    const expected = Object.assign({id: product.id}, product);

    // act
-    const response = await request.get('/product/ink-pen')
+    const response = await client.get('/product/ink-pen');

    // assert
-    expect(response.body).to.deepEqual({
-      id: product.id,
-      name: 'Ink Pen',
-      slug: 'ink-pen',
-      price: 1,
-      category: 'Stationery',
-      available: true,
-      description: 'The ultimate ink-powered pen for daily writing',
-      label: 'popular',
-      endDate: null,
-    });
+    expect(response.body).to.containEql(expected);
  });

  async
function givenRunningApp() { - app = new HelloWorldApp(); + app = new HelloWorldApplication(); + app.dataSource(testdb); const server = await app.getServer(RestServer); server.bind(RestBindings.PORT).to(0); + await app.boot(); await app.start(); - const port: number = await server.get(RestBindings.PORT); - request = supertest(`http://127.0.0.1:${port}`); + client = createClientForHandler(server.handleHttp); } }); ``` ### Test Sequence customizations -Custom sequence behavior is best tested by observing changes in behavior of affected endpoints. For example, if your sequence has an authentication step that rejects anonymous requests for certain endpoints, then you can write a test making an anonymous request to such an endpoint to verify that it's correctly rejected. These tests are essentially the same as the tests verifying implementation of individual endpoints as described in the previous section. +Custom sequence behavior is best tested by observing changes in behavior of the +affected endpoints. For example, if your sequence has an authentication step +that rejects anonymous requests for certain endpoints, then you can write a test +making an anonymous request to those endpoints to verify that it's correctly +rejected. These tests are essentially the same as the tests verifying +implementation of individual endpoints as described in the previous section. 
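The scenario described above can be sketched end to end without the framework. In this hedged, self-contained illustration (all names are made up, and a plain Node `http` server stands in for an application with a custom Sequence), a hypothetical sequence step rejects requests that lack an `Authorization` header, and the test observes only the resulting status codes:

```typescript
import * as http from 'http';

// A hypothetical sequence step: reject requests with no Authorization header.
// In a real application this logic would live in your custom Sequence class.
const server = http.createServer((req, res) => {
  if (!req.headers.authorization) {
    res.statusCode = 401;
    res.end('Unauthorized');
    return;
  }
  res.statusCode = 200;
  res.end('ok');
});

// Issue a request and report only the status code, as a black-box client would.
function getStatus(
  port: number,
  headers?: http.OutgoingHttpHeaders,
): Promise<number> {
  return new Promise((resolve, reject) => {
    http
      .get({port, headers, agent: false}, res => {
        res.resume(); // drain the body; only the status code matters here
        resolve(res.statusCode || 0);
      })
      .on('error', reject);
  });
}

// One anonymous request and one authenticated request against the same endpoint.
async function verifySequenceBehavior(): Promise<number[]> {
  await new Promise<void>(resolve => server.listen(0, resolve));
  const {port} = server.address() as {port: number};
  const anonymous = await getStatus(port);
  const authorized = await getStatus(port, {authorization: 'Bearer test-token'});
  server.close();
  return [anonymous, authorized];
}

const statuses = verifySequenceBehavior();
statuses.then(([anon, auth]) => console.log(anon, auth)); // 401 200
```

Each observed behavior difference (anonymous vs. authenticated here) gets its own assertion, mirroring the "one test per user role" advice from the previous section.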
From c8639d5a62d731330025562453a462a8d9aef7a7 Mon Sep 17 00:00:00 2001 From: shimks Date: Thu, 15 Mar 2018 14:09:38 -0400 Subject: [PATCH 2/2] fix: change html to md --- docs/site/Booting-an-Application.md | 4 ++-- docs/site/Testing-your-application.md | 8 ++++---- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/site/Booting-an-Application.md b/docs/site/Booting-an-Application.md index b7120668020d..9340171a0f9f 100644 --- a/docs/site/Booting-an-Application.md +++ b/docs/site/Booting-an-Application.md @@ -185,7 +185,7 @@ a part of the `@loopback/boot` package and loaded automatically via `BootMixin`. ### Controller Booter -This Booter's purpose is to discover [Controller](Controllers.html) type Artifacts and to bind +This Booter's purpose is to discover [Controller](Controllers.md) type Artifacts and to bind them to the Application's Context. You can configure the conventions used in your @@ -201,7 +201,7 @@ of your Application. The `controllers` object supports the following options: ### Repository Booter -This Booter's purpose is to discover [Repository](Repository.html) type Artifacts and to bind +This Booter's purpose is to discover [Repository](Repositories.md) type Artifacts and to bind them to the Application's Context. The use of this Booter requires `RepositoryMixin` from `@loopback/repository` to be mixed into your Application class. diff --git a/docs/site/Testing-your-application.md b/docs/site/Testing-your-application.md index e93f179d48eb..9e486adf9c6f 100644 --- a/docs/site/Testing-your-application.md +++ b/docs/site/Testing-your-application.md @@ -28,8 +28,8 @@ or [`given/when/then`](https://martinfowler.com/bliki/GivenWhenThen.html). Both styles work well, so pick one that you're comfortable with and start writing tests! -For an introduction to automated testing, see [Define your testing strategy](Defining-your-testing-strategy.html). -For a step-by-step tutorial, see [Incrementally implement features](Implementing-features.html). 
+For an introduction to automated testing, see [Define your testing strategy](Defining-your-testing-strategy.md). +For a step-by-step tutorial, see [Incrementally implement features](Implementing-features.md). {% include important.html content=" A great test suite requires you to think smaller and favor fast and focused @@ -318,9 +318,9 @@ produces code that's difficult to comprehend. #### Create a stub Repository When writing an application that accesses data in a database, the best -practice is to use [repositories](Repositories.html) to encapsulate all +practice is to use [repositories](Repositories.md) to encapsulate all data-access/persistence-related code. Other parts of the application -(typically [controllers](Controllers.html)) can then depend on these +(typically [controllers](Controllers.md)) can then depend on these repositories for data access. To test Repository dependents (for example, Controllers) in isolation, we need to provide a test double, usually as a test stub.