Сергей Трегуб edited this page Oct 23, 2021 · 17 revisions

Cross-Origin Resource Sharing (CORS) and Preflight Requests

Quote from an MDN article:

Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) have permission to access selected resources from a server at a different origin. A web application makes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, and port) than its own origin.

By default, all origins, methods, and headers are allowed. If you want more control, you can adjust the CORS policy in the Startup.cs file.
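For example, a more restrictive policy might look like this (a sketch; the origin and methods below are placeholders, not part of the project):

```csharp
app.UseCors(builder => builder
    .WithOrigins("https://example.com") // hypothetical allowed origin
    .WithMethods("GET", "POST")         // restrict allowed HTTP methods
    .AllowAnyHeader()
);
```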

You can find more information in the official documentation.

According to MDN:

... "preflighted" requests first send an HTTP request by the OPTIONS method to the resource on the other domain, in order to determine whether the actual request is safe to send. Cross-site requests are preflighted like this since they may have implications to user data.

It means your service must handle OPTIONS requests to allow cross-site requests, in addition to the general CORS support described above. There are at least three ways of implementing that. The project uses the simplest one: the server replies to an OPTIONS request with an appropriate answer without any actual processing. It works in most cases. You can alter the implementation in the Middleware/OptionsVerbMiddleware.cs file or disable it by removing it from the pipeline in Startup.cs.
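The gist of this approach can be sketched as a tiny middleware (this is an illustration, not the project's exact code):

```csharp
public class OptionsVerbMiddleware
{
    readonly RequestDelegate _next;

    public OptionsVerbMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        if (HttpMethods.IsOptions(context.Request.Method))
        {
            // Short-circuit: answer the preflight without further processing.
            // The CORS middleware has already added the necessary headers.
            context.Response.StatusCode = StatusCodes.Status200OK;
            return;
        }

        await _next(context);
    }
}
```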

If you use SignalR, the configuration should be slightly different, because by default it uses the option withCredentials = true. So you should either set withCredentials = false in the HubConnectionBuilder options or use the following configuration:

app.UseCors(builder => builder
    .AllowAnyMethod()
    .AllowAnyHeader()
    .AllowCredentials()
    .SetIsOriginAllowed(origin => true)
);

Dependency Injection

Dependency Injection is a form of inversion of control that supports the dependency inversion design principle.
This project uses Autofac as an IoC container, but it can be replaced with any other if you wish.

First of all, you must configure a container. There are two methods in Startup.cs for this: ConfigureContainer and ConfigureProductionContainer. The latter is only called if your environment is Production. Read the comments in these methods for more information.

You can place your registrations there directly, but I don't recommend that. According to the official documentation, the recommended way is to use so-called modules. The project already has a default module (Configuration\AutofacModules\DefaultModule.cs) that you can use. Just register your components and services there and have fun. For example:

builder.RegisterType<Repo.ProductsRepo>().As<Repo.IProductsRepo>().SingleInstance();

If you add a new module, please don't forget to add it to the container in one of the methods, ConfigureContainer or ConfigureProductionContainer:

builder.RegisterModule<DefaultModule>();
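A new module is just a class derived from Autofac.Module; a minimal sketch could look like this (the ProductsModule name is illustrative, not part of the project):

```csharp
public class ProductsModule : Autofac.Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register everything this feature area needs in one place.
        builder.RegisterType<Repo.ProductsRepo>()
            .As<Repo.IProductsRepo>()
            .SingleInstance();
    }
}
```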

Second, you typically use registered types and interfaces in the constructors of your controllers or other services.

IProductsRepo ProductsRepo { get; }

public ProductsController(IProductsRepo productsRepo)
{
    ProductsRepo = productsRepo ?? throw new ArgumentNullException(nameof(productsRepo));
}

The productsRepo parameter will be autowired by the container.

You can read more on this in the official Autofac documentation.

AutoMapper

When you create a web service, mapping one object to another is a typical task. For example, you may want to map an entity read from a DB to a DTO (Data Transfer Object) to return it as a response. You can write custom code to do that mapping or use a ready-made solution. AutoMapper is one such solution.

Quote from the official site:

AutoMapper is a simple little library built to solve a deceptively complex problem - getting rid of code that mapped one object to another

You can find detailed information about using it in different cases in the official documentation.

As an additional advantage of using AutoMapper, you get a clean separation between mapping code and the code that uses it. Here we focus on the most common cases and some simple examples to give you a starting point.

The project already contains a ready-to-use configuration for AutoMapper. It is integrated with Autofac, so you can inject the IMapper interface into your services or controllers, or you can use Autofac to resolve services in mapping code! All the mapping code is gathered in so-called profiles. A profile is a class inherited from AutoMapper.Profile. The project already has one for demo purposes (see Configuration\AutoMapperProfiles\DefaultProfile.cs). You can use that one or add as many profiles as you want. To add a profile, you must create a new class derived from AutoMapper.Profile. At startup your profile is discovered and loaded automatically. If you want to use some service registered in the DI container, you ought to inject its interface into the profile's constructor.
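For instance, a profile that receives a service through its constructor might look like this (a sketch; ISuppliersRepo and GetRating are assumed names used only for illustration):

```csharp
public class SuppliersProfile : AutoMapper.Profile
{
    public SuppliersProfile(ISuppliersRepo suppliersRepo) // injected by the container
    {
        CreateMap<Model.Supplier, Dto.Supplier>()
            // Use the injected service while mapping.
            .ForMember(x => x.Rating, o => o.MapFrom(src => suppliersRepo.GetRating(src.Id)));
    }
}
```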

Let's say for example you need to return Dto.Product object from some method of your controller, but you have Model.Product in your code. First of all, we should create a mapping from Model.Product to Dto.Product. This can be done in Configuration\AutoMapperProfiles\DefaultProfile.cs:

  CreateMap<Model.Product, Dto.Product>();

Second, use it in your controller:

  [HttpGet]
  public IEnumerable<Dto.Product> Get()
  {
      return ProductsRepo.Get().Select(Mapper.Map<Dto.Product>);
  }

Let's see a couple of examples of how to configure mappings:

cfg.CreateMap<Model.Product, Dto.Product>()
  // Using a custom constructor
  .ConstructUsing(src => new Dto.Product(src.Id))
  // Using custom formatting
  .ForMember(x => x.Date, o => o.ResolveUsing((src, dest, destMember, context) =>
    $"{src.Year}-{src.Month:D2}-{src.Day:D2}"))
  // A value calculated from other fields of the original object
  .ForMember(x => x.Contract, o => o.ResolveUsing((src, dest, destMember, context) =>
    $"{src.Contract.Number} from {src.Contract.Date}"))
  .AfterMap((src, dest, ctx) =>
  {
    // Resolve a service from the DI container
    dest.SupplierName = ctx.Options.CreateInstance<ISuppliersRepo>().GetById(src.SupplierId);
  });

Content formatting

The template preconfigures JSON formatters with the most commonly used serialization and deserialization options: camel-case property names and enums serialized as strings. Be aware of this and adjust it if it doesn't suit your case.
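With the standard System.Text.Json stack, these two options would be configured roughly like this (a sketch, not necessarily the template's exact code):

```csharp
services.AddControllers().AddJsonOptions(options =>
{
    // Camel-case property names: "ProductName" -> "productName".
    options.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
    // Serialize enums as their string names instead of numbers.
    options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter());
});
```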

Logging

Each server in a production environment must log information messages and errors. There are many libraries for this; this project template uses Serilog. Everything is already configured for you and works without any additional setup.

To use a logger, you must inject the ILogger<YourClass> interface into any controller or service, as you usually do in any ASP.NET Core application. YourClass will then be included in all of the events as the SourceContext property, so you can use it to filter events or automatically add it to each of the log messages. More about filtering and formatting output is below. Here's an example:

public class ProductsController
{
  ILogger Logger { get; }

  public ProductsController(ILogger<ProductsController> logger)
  {
    Logger = logger ?? throw new ArgumentNullException(nameof(logger));
  }

  public IActionResult Create(int id, [FromBody]Dto.UpdateProduct newProductDto)
  {
    // ... other code
    Logger.LogInformation("New product was created: {@product}", createdProduct);
    // some more code
  }
}

In this case, the SourceContext property of the event equals the full name of the ProductsController class, including its namespace. You can use Error, Warning, and so on instead of Information in the example above. By default, Debug is used as the minimum log level in a development environment and Information in a production one, for all contexts except Microsoft and System. For the Microsoft and System contexts the minimum level is set to Warning. To see all of the messages, set the minimum level to Verbose (not Trace!), but be careful with that, because you can get a lot of text!

Log files are created in the %APPDATA%/Logs folder, specific to the current process's user. For example, if your server is running under the AppService user account, the folder will be C:\Users\AppService\AppData\Roaming\Logs. The name of the file is the same as the service's main assembly name, with the .log extension. Log files are rotated after reaching 10 MB, and the folder stores 10 log files at most. This prevents you from being buried under tons of logs.
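In Serilog's JSON configuration, a file sink with such rotation limits would look roughly like this (a sketch; the path is illustrative, while the argument names are those of the standard Serilog.Sinks.File):

```json
{
  "Name": "File",
  "Args": {
    "path": "Logs/MyService.log",
    "fileSizeLimitBytes": 10485760,
    "rollOnFileSizeLimit": true,
    "retainedFileCountLimit": 10
  }
}
```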

Another useful option is the output template, which can be changed for each sink separately. I have already included templates for each of the sinks in appsettings.json. You can read more in the Serilog documentation. One useful tweak is to add the SourceContext property to the output template to see the source of events:

{
  "Name": "Console",
  "Args": {
    "outputTemplate": "[{Timestamp:HH:mm:ss}] [{Level:u3}] [{SourceContext}] {Message:lj}{NewLine}{Exception}"
  }
}

The template already contains some of the so-called enrichers, which you can use as properties in your output template: ThreadId, ProcessId, MachineName, EnvironmentUserName.

Another useful option is logging HTTP requests. It is enabled by this line:

  app.UseSerilogRequestLogging();

It is disabled by default in the Release configuration and enabled in Debug. You can change this in this line of the configuration:

      "Override": {
        "Microsoft": "Warning",
        "Microsoft.Hosting": "Information",
        "System": "Warning",
        "Serilog.AspNetCore":  "Warning"     <--- Change this to Information to see HTTP requests
      }

You can fine-tune your logs and select more granularly what events you want to see in your logs based on the event's properties. There are two options:

  • Use 'Override' to change the minimum level for different source contexts. The more specific option overrides the less specific one:
"MinimumLevel": {
  "Default": "Debug",
  "Override": {
    "Microsoft": "Warning",
    "Microsoft.Hosting": "Information"
  }
}
  • Use filters to select only the events that you want to log. The project contains Serilog.Filters.Expressions, which allows you to create expressions to filter event messages. To make it clearer, see the example:
"Serilog": {
  "Filter": [
    {
      "Name": "ByIncludingOnly",
      "Args": {
        "expression": "StartsWith(SourceContext, 'AwesomeService.')"
      }
    }
  ]
}

This filter includes only the events generated by a source context whose name starts with AwesomeService.. SourceContext is the full name of the class, as in this example:

namespace AwesomeService 
{

  class ProductsController : Controller
  {
    public ProductsController(ILogger<ProductsController> logger) {}
  }

}

In this case, the source context equals AwesomeService.ProductsController.

You can add more than one filter, using either the ByIncludingOnly or the ByExcluding name.

According to the official documentation:

Unlike other logging libraries, Serilog is built with powerful structured event data in mind.

This means that you can use any event property in the expression, either predefined or custom. Let's see another example. You have some code somewhere:

ILogger<ProductsController> logger = ....;

logger.LogInformation("Id = {id}, CompanyName = {name}: {message}", id, name, message);

And the filter:

"Serilog": {
  "Filter": [
    {
      "Name": "ByIncludingOnly",
      "Args": {
        "expression": "SourceContext = 'AwesomeService.ProductsController' and @Level = 'Information' and name like '%soft%'"
      }
    }
  ]
}

This way you get only the informational events from the AwesomeService.ProductsController class with a company name that contains soft.

One more thing I want you to know: you can apply filters not only to all of the sinks at once, but also different filters to different sinks. This can be used to split log messages between different files or any other sinks. Using expressions to split log output between sinks is a little bit tricky; read the article Serilog and ASP.NET Core: Split Log Data Using Serilog FilterExpression, where you'll find an in-depth explanation.

Cache Control

In most cases, you don't want to allow a browser to cache server responses. The project uses Filters/CacheControlFilter.cs to add a Cache-Control header to all responses to GET requests. This is enough for most cases, but you can change it as needed.

Unhandled Exceptions Handling

If something goes wrong and a server has crashed, we want to know exactly what happened and where. It is very useful to log all unhandled exceptions.

By default, ASP.NET Core returns a 500 Internal Server Error HTTP status with the exception message and call stack in case of any exception. However, this is not a recommended way to report errors from REST services. Typically we want to return a 404 status code if the requested object is not found, 403 if the action is not permitted, and so on.

The project contains Middleware/ExceptionMiddleware.cs to do this. The idea is simple: we check whether an exception is one of the well-known types, and if so, we create and return an appropriate HTTP status code and response, and also log additional information about the exception.

Let's see an example:

async Task HandleExceptionAsync(HttpContext context, Exception ex)
{
    int statusCode = 500;

    context.Response.ContentType = "application/json";
    context.Response.StatusCode = statusCode;

    // We can decide what the status code should return
    if (ex is KeyNotFoundException)
    {
        context.Response.StatusCode = StatusCodes.Status404NotFound;
    }
    else if (ex is DuplicateKeyException)
    {
        context.Response.StatusCode = StatusCodes.Status400BadRequest;
    }

    await context.Response.WriteAsync(
        JsonConvert.SerializeObject(
            new ErrorResponse(ex, Environment.IsDevelopment())));

    if (context.Response.StatusCode == StatusCodes.Status500InternalServerError)
    {
        Logger.LogError(ex, "Unhandled exception occurred");
    }
    else
    {
        Logger.LogDebug(ex, "Unhandled exception occurred");
    }
}

You should define your own exceptions and handle them here to get the right HTTP status code. For more information about exceptions see the official documentation.
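Such a custom exception is usually an ordinary exception class; DuplicateKeyException from the example above might be defined like this (a sketch):

```csharp
public class DuplicateKeyException : Exception
{
    public DuplicateKeyException(string message)
        : base(message)
    {
    }
}
```

Throw it from your domain code (for example, when an insert hits a unique constraint), and the middleware will translate it into a 400 response.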

Model Validation

Before actually processing a request in your controller, you must make sure that the input model is valid. If it is not, you must return an error. To avoid writing this duplicate code in each method of your controller, use a global filter. You can see the current implementation in the ReferenceProject/Filters/ValidateModelFilter.cs file. Alter it if you need more sophisticated behavior.
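The gist of such a filter can be sketched as follows (an illustration of the standard ASP.NET Core action-filter API, not the project's exact code):

```csharp
public class ValidateModelFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        // Return 400 with validation details before the action runs.
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
    }
}
```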

Application Settings

The template uses Options pattern to deal with application settings. Add new settings with these steps:

  • Create a new class for your settings and place it to Settings folder;
  • Add your settings to appsettings.json or appsettings.Development.json;
  • Register your settings in the DI-container in the Configuration/ApplicationSettings.cs file;
  • Inject the settings to any class.

The Configuration/ApplicationSettings.cs file contains an example of how to add your settings:

  services.AddOptions<Settings.Products>()
      .AutoBind()
      .SubstituteVariables();

Here you can see a couple of unusual extension methods. They come from the Configuration Extensions library. The first one saves you from repeating Configuration.GetSection("Products") for each of the options. The second one allows you to use environment variables in option values (see the next section). You'll find more information about the extensions on the project's home page.

You can use one of the methods to inject the settings to your class (see an example in a constructor of ProductsController class in the Controllers folder):

  • Use IOptionsSnapshot<Settings.Products> for a scoped service, such as a controller;
  • Use IOptionsMonitor<Settings.Products> for a singleton service.
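For example, injecting the settings into a controller via IOptionsSnapshot might look like this (a sketch; the Settings.Products class is assumed to be the one registered above):

```csharp
public class ProductsController : Controller
{
    Settings.Products Settings { get; }

    public ProductsController(IOptionsSnapshot<Settings.Products> settings)
    {
        // .Value gives the current snapshot of the bound settings.
        Settings = settings?.Value ?? throw new ArgumentNullException(nameof(settings));
    }
}
```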

Using Environment Variables in Application Settings

"Build once, deploy anywhere" is a useful principle that gives us the ability to deploy an app to any environment with ease. Environment variables are a recommended way to store any config options that vary from one environment to another. If you want to know more about this, you can read about the 12 Factor App methodology.

The standard implementation of IConfiguration does not expand environment variables used in configuration values, but we can use the SubstituteVariables extension to do it for us.

Let's see an example.

First, let's add a config option to appsettings.json and use an env var in it:

  "Products": {
    "TempFolder": "%TEMP%",
    "BackendServiceUrl": "http://%gateway%/backend"
  }

Second, we should use the SubstituteVariables extension method when registering our settings class:

  services.AddOptions<Settings.Products>()
      .AutoBind()
      .SubstituteVariables();

OK, now we can set any option through environment variables, but is there "the right" way to set the env vars up? Of course, we can set them at the OS level, but that is not the only way. In the Unix world, dot-env files (.env) are a convenient way to set up environment variables. Luckily, we can use the DotNetEnv NuGet package to do the same thing on Windows.

All we need is to create a file named .env and put it next to the appsettings.json of our app. The Startup.cs file already contains all the configuration for that.
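Loading the file typically boils down to one call early at startup; with DotNetEnv it looks roughly like this (a sketch, assuming the .env file sits in the app's working directory):

```csharp
// Populate process environment variables from the .env file, if present.
DotNetEnv.Env.Load();

// After this, the variables are visible as usual:
var gateway = Environment.GetEnvironmentVariable("GATEWAY");
```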

So, let's create this file and add our variable from the example above:

GATEWAY=localhost

Now the BackendServiceUrl property of our settings class will return the http://localhost/backend value.

Health Checks

ASP.NET Core offers health check functionality out of the box. The template contains preconfigured health checks with the default functionality. You can alter the configuration as you wish. See the official documentation for more information about health checks. Hit http://localhost:5000/health to see how it works.
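In a plain ASP.NET Core app, the default wiring looks roughly like this (a sketch of the standard API, not necessarily the template's exact code):

```csharp
// In ConfigureServices:
services.AddHealthChecks();

// In Configure:
app.UseEndpoints(endpoints =>
{
    // Responds with "Healthy" (200) on GET /health by default.
    endpoints.MapHealthChecks("/health");
});
```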

Documenting API

Documentation is crucial for those who use your API. However, creating documentation is a somewhat boring process. Moreover, once you have written it, you have to publish it somewhere.

Swagger is a full and comprehensive solution to that problem. Documenting an API is only one of its features; you can find the full list on the official site.

This project uses Swashbuckle: "Swagger tooling for APIs built with ASP.NET Core. Generate beautiful API documentation, including a UI to explore and test operations, directly from your routes, controllers and models", as its GitHub repo says. This library automatically creates interactive documentation for all controllers' actions and models as an HTML page hosted within your service. You don't need to bother about hosting the documentation anymore; the service you develop becomes self-documenting. You can use standard .NET XML comments to add more information.

Once you have an API that can describe itself in Swagger, you've opened the treasure chest of Swagger-based tools, including a client generator that can target a wide range of popular platforms. See swagger-codegen for more details.

Let's see an example.

Here is how to add an action's description:

/// <summary>
/// Get a product by id
/// </summary>
/// <param name="id">A product id</param>
[ProducesResponseType(typeof(Dto.Product), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public Dto.Product GetById(int id)
{
  // implementation
}

This is a documentation for a class Dto.Product:

    /// <summary>
    /// DTO for reading product (-s)
    /// </summary>

    public class Product
    {
        /// <summary>
        /// Product id
        /// </summary>
        public int Id { get; set; }

        /// <summary>
        /// Product name
        /// </summary>
        /// <example>lime</example>
        public string Name { get; set; }
    }

To see the result, run your service and navigate to http://localhost:5000/swagger in your favorite browser. And the magic appears :).

All aspects of the Swashbuckle configuration are gathered in the Configuration/DependenciesConfig.cs and Configuration/MiddlewareConfig.cs files.

Deploying to IIS

Skip this section if you don't use IIS.

To serve your project with IIS, you need a web.config file. It is created by the dotnet publish -c Release command. However, the file created by default lacks some important options. To address this, the project contains its own web.config file, which the dotnet publish command copies to the destination folder.

IIS and ASP.NET limit the maximum lengths of the query string, URL, and content to certain numbers of bytes. You may want to change these limits in some circumstances. Carefully read the comments in web.config before altering them.
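For reference, these limits live in the requestFiltering section of web.config; the values below are illustrative, not the project's defaults:

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <!-- Lengths are in bytes; tune them for your workload. -->
      <requestLimits maxQueryString="8192"
                     maxUrl="8192"
                     maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>
```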