
UnitsNet

· 3 min read
Ahmet Buğra Kösen
Software Developer

Unit conversions can be challenging in applications working with physical quantities. The UnitsNet library, developed for the .NET platform, makes conversions between different measurement systems and physical quantities easy and reliable. In this article, we'll explore the features and use cases of the UnitsNet library.

What is UnitsNet?

UnitsNet is an open-source library developed to perform unit conversions of physical quantities in .NET applications in a simple and reliable way. It supports many measurement types and enables conversions between different units. For example, operations like getting a length in meters as kilometers or converting a weight in kilograms to pounds are extremely easy with UnitsNet.

Supported Unit Types

The UnitsNet library supports a wide range of physical quantities and units. These quantities include:

  • Length: meter, kilometer, mile, inch, foot, etc.
  • Mass: kilogram, gram, ton, pound, etc.
  • Temperature: Celsius, Fahrenheit, Kelvin, etc.
  • Volume: liter, milliliter, gallon, cubic meter, etc.
  • Area: square meter, hectare, acre, etc.
  • Pressure: Pascal, bar, atm, psi, etc.
  • Speed: meter/second, kilometer/hour, mile/hour, etc.
  • Energy: joule, calorie, kilowatt-hour, etc.
  • Power: watt, kilowatt, horsepower, etc.
  • Information: byte, kilobyte, megabyte, gigabyte, terabyte, etc.

UnitsNet provides support for the above quantities and more, making it suitable for a wide variety of engineering and scientific calculations.
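Before diving into the C# API, it helps to see the core idea libraries like UnitsNet are built on: every value is normalized to a base unit and converted from there. The Python sketch below is purely illustrative (the factors, names, and function are mine, not UnitsNet's API):

```python
# Base-unit normalization: convert everything to meters, then to the target.
# Factors and names are illustrative, not part of UnitsNet.
TO_METERS = {
    "meter": 1.0,
    "kilometer": 1000.0,
    "mile": 1609.344,
    "foot": 0.3048,
}

def convert_length(value, from_unit, to_unit):
    """Convert by normalizing to the base unit (meters) first."""
    meters = value * TO_METERS[from_unit]
    return meters / TO_METERS[to_unit]

print(convert_length(10, "kilometer", "mile"))  # ≈ 6.2137
```

Keeping one base unit per quantity means adding a new unit only requires one conversion factor, not a factor for every unit pair.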

UnitsNet Implementation

First, let's add the UnitsNet NuGet package to the project:

dotnet add package UnitsNet

Defining a quantity and converting it to different units with UnitsNet is quite simple:

using System;
using UnitsNet;

class Program
{
    static void Main()
    {
        // Define 10 km
        var distance = Length.FromKilometers(10);

        Console.WriteLine($"Meters: {distance.Meters}");
        // Meters: 10000

        Console.WriteLine($"Miles: {distance.Miles}");
        // Miles: 6.2137119223733395

        Console.WriteLine($"Yards: {distance.Yards}");
        // Yards: 10936.132983377078

        // Define 100 MB
        var fileSize = Information.FromMegabytes(100);

        Console.WriteLine($"Bytes: {fileSize.Bytes}");
        // Bytes: 100000000

        Console.WriteLine($"Gigabytes: {fileSize.Gigabytes}");
        // Gigabytes: 0.1
    }
}

In this code, we define a length of 10 kilometers with Length.FromKilometers(10) and display its values in different units using properties like distance.Meters and distance.Miles. We also convert a 100 MB file size to bytes and gigabytes.

For more information, please check out the project's GitHub page...

Things to Consider When Using UnitsNet

It's useful to pay attention to the following points during usage:

  1. Choosing the Right Quantity: There's a separate class for each quantity (like Length, Mass, Temperature). Make sure you choose the correct quantity to use.
  2. Unit Precision: UnitsNet may perform rounding in some unit conversions. If you're performing operations that require very high precision, it's worth checking the results.
  3. Performance: When working with large datasets, UnitsNet conversion operations may need to be optimized. It's especially beneficial to run performance tests if unit conversions will be done inside large loops.

Conclusion

UnitsNet is a great solution for developers who want to perform unit conversions reliably and easily on the .NET platform. With its wide unit support, simple usage, and powerful conversion features, it provides convenience in scientific, engineering, and everyday applications. If you have unit conversion needs in your projects, I recommend trying UnitsNet.

See you in the next article...

Adding Custom Sounds to Notifications in Expo

· 3 min read
Ali Burhan Keskin
Software Developer


Adding notifications to your mobile app with Expo is quite straightforward. Customizing these notifications with unique sounds is a great way to enhance the user experience. However, with Expo, especially on Android devices, some additional settings are necessary.

In this article, we’ll go over the step-by-step process for adding custom sounds to notifications in your Expo project.

Note: This guide assumes you have configured the expo-notifications package and the necessary permissions. Keep in mind that, per the Expo documentation, custom notification sounds are only supported when using EAS Build.

1. Adding Sound Files and Configuring app.config

First, add your sound file according to your project’s file structure. For example, you might place it at src/assets/sounds/bip.mp3. Then, specify this file path in your app.config file:

import { ConfigContext, ExpoConfig } from "expo/config";

export default ({ config }: ConfigContext): ExpoConfig => ({
  ...config,
  assetBundlePatterns: ["./src/assets/sounds/*"],
  plugins: [
    [
      "expo-notifications",
      {
        sounds: ["./src/assets/sounds/bip.mp3"],
      },
    ],
  ],
});

2. Configuring Notifications

Configuring notifications correctly is crucial. For Android, you need to create a new notification channel to send notifications with a custom sound. Notifications sent through this channel will play the specified sound. For iOS, you don’t need a separate channel; simply specify the sound file directly in the notification content.

Define the notification settings as follows:

Notifications.setNotificationHandler({
  handleNotification: async () => ({
    shouldShowAlert: true,
    shouldPlaySound: true,
    shouldSetBadge: true,
  }),
});

Then, set up the configuration in App.js within a useEffect:

if (Platform.OS === "android") {
  await Notifications.setNotificationChannelAsync("bip", {
    name: "BipChannel",
    importance: Notifications.AndroidImportance.MAX,
    vibrationPattern: [0, 250, 250, 250],
    sound: "bip.mp3",
  });
}

3. Sending a Notification

Now that the configuration is ready, you can send notifications with a custom sound. Use the following function to send a test notification:

const handleTestNotification = async () => {
  if (Platform.OS === "android") {
    await Notifications.scheduleNotificationAsync({
      content: {
        title: "Test Notification",
        sound: true,
      },
      trigger: {
        seconds: 1,
        channelId: "bip",
      },
    });
  } else {
    await Notifications.scheduleNotificationAsync({
      content: {
        title: "Test Notification",
        sound: "bip.mp3",
      },
      trigger: {
        seconds: 1,
      },
    });
  }
};

Note: Remember to set sound: true and channelId: "bip" for Android, and sound: "bip.mp3" for iOS.

Using custom notification sounds in Expo is a powerful way to personalize the user experience of your mobile app. Although there are slight differences between Android and iOS, following these steps allows you to quickly add custom sounds to your project. This will give your app a unique touch and provide a more memorable experience for your users.

Try sending a test notification to see how custom notification sounds work in your projects!

Happy coding!

Resources

Expo Notifications Documentation

Apache Ignite

· 5 min read
Ahmet Buğra Kösen
Software Developer

As the need for database scaling and in-memory data processing increases, so does the search for a powerful tool that can meet these needs. Apache Ignite is an open-source, distributed in-memory data platform that provides a solution to this need. In this article, we'll explore Apache Ignite, look at its core features, and see how to integrate it with .NET Core through a sample project.

What is Apache Ignite?

Apache Ignite is a scalable and high-performance data platform that uses in-memory technologies for data storage and processing. Ignite not only stores data in memory but also offers distributed data processing, SQL and NoSQL data management, data grid, and much more. These features enable you to develop low-latency and high-performance applications.

Ignite is a widely preferred tool for solving database scaling issues or intensive data processing needs. It's an ideal platform especially for big data and real-time processing requirements.

Core Features of Apache Ignite

  • In-Memory Storage: Ignite provides high-speed access by storing your data in memory.
  • Distributed SQL: You can access data using SQL queries, even in a distributed environment.
  • Horizontal Scaling: Ignite can easily scale horizontally by adding nodes.
  • Low Latency: Since Ignite keeps data directly in memory, it offers millisecond-level latency.

.NET Core Integration with Apache Ignite

Apache Ignite, with its .NET Core support, allows you to use in-memory data storage and distributed data processing capabilities in your .NET applications. Let's see how we can use Ignite with .NET Core.

Step 1: Adding the Ignite Library to Your Project

First, we'll start by installing the required NuGet package for Ignite's .NET Core support. You can install this package using the following command:

Install-Package Apache.Ignite

Step 2: Starting the Ignite Server

We'll create a simple console application to start the Ignite server. The Ignite server is the main component we'll use to store data and perform distributed processing. The following code snippet shows an example you can use to start the Ignite server:

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;

namespace IgniteDotNetExample
{
    class Program
    {
        static void Main(string[] args)
        {
            var igniteConfiguration = new IgniteConfiguration
            {
                IgniteInstanceName = "myIgniteInstance",
                WorkDirectory = "./igniteWorkDir",
                ClientMode = false // Running as a server node.
            };

            IIgnite ignite = Ignition.Start(igniteConfiguration);

            Console.WriteLine("Ignite server started.");

            // Let's create a sample cache.
            var cacheConfiguration = new CacheConfiguration
            {
                Name = "sampleCache",
                CacheMode = CacheMode.Partitioned,
                AtomicityMode = CacheAtomicityMode.Transactional
            };

            var cache = ignite.GetOrCreateCache<int, string>(cacheConfiguration);

            // Let's add data to the cache.
            cache.Put(1, "Hello Ignite!");
            string value = cache.Get(1);

            Console.WriteLine($"Data in cache: {value}");

            Console.ReadLine();
        }
    }
}

In the code above, the Ignite server is started and a cache named sampleCache is created. This cache will be used to store our data in memory.

Step 3: Two Ignite Instances and Persistent Storage Structure

Let's prepare an example showing how Apache Ignite works with two instances and how to enable persistent storage. In this example, we'll start two Ignite nodes and configure persistent storage to save data on disk.

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Core.Configuration;

namespace IgnitePersistentExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Starting the first Ignite instance
            var igniteConfig1 = new IgniteConfiguration
            {
                IgniteInstanceName = "igniteInstance1",
                WorkDirectory = "./igniteWorkDir1",
                DataStorageConfiguration = new DataStorageConfiguration
                {
                    DefaultDataRegionConfiguration = new DataRegionConfiguration
                    {
                        Name = "Default_Region",
                        PersistenceEnabled = true // Persistent storage enabled.
                    }
                }
            };

            IIgnite ignite1 = Ignition.Start(igniteConfig1);
            ignite1.GetCluster().SetActive(true); // With persistence, the cluster must be activated explicitly.
            Console.WriteLine("Ignite Instance 1 started.");

            // Starting the second Ignite instance
            var igniteConfig2 = new IgniteConfiguration
            {
                IgniteInstanceName = "igniteInstance2",
                WorkDirectory = "./igniteWorkDir2",
                DataStorageConfiguration = new DataStorageConfiguration
                {
                    DefaultDataRegionConfiguration = new DataRegionConfiguration
                    {
                        Name = "Default_Region",
                        PersistenceEnabled = true // Persistent storage enabled.
                    }
                }
            };

            IIgnite ignite2 = Ignition.Start(igniteConfig2);
            ignite2.GetCluster().SetActive(true);
            Console.WriteLine("Ignite Instance 2 started.");

            // Creating cache and adding data
            var cacheConfiguration = new CacheConfiguration
            {
                Name = "persistentCache",
                CacheMode = CacheMode.Partitioned,
                AtomicityMode = CacheAtomicityMode.Transactional
            };

            var cache = ignite1.GetOrCreateCache<int, string>(cacheConfiguration);
            cache.Put(1, "Persistent Hello Ignite!");
            string value = cache.Get(1);

            Console.WriteLine($"Data in Persistent Cache: {value}");

            Console.ReadLine();
        }
    }
}

In the code above, two different Ignite instances are started with persistent storage configuration enabled. This way, even if the Ignite servers are restarted, the data will be saved on disk and won't be lost.

Step 4: Accessing Data with Ignite

One of Ignite's most powerful features is SQL support. You can process data stored on Ignite using SQL queries. For example, it's possible to store a user table on Ignite and access the data in this table using SQL queries:

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Core.Cache.Query;

[Serializable]
public class User
{
    [QuerySqlField(IsIndexed = true)]
    public int Id { get; set; }

    [QuerySqlField]
    public string Name { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        IIgnite ignite = Ignition.Start();

        var cache = ignite.GetOrCreateCache<int, User>(new CacheConfiguration("userCache", typeof(User)));

        // Let's add users.
        cache.Put(1, new User { Id = 1, Name = "Ali" });
        cache.Put(2, new User { Id = 2, Name = "Ayse" });

        // Let's fetch users with an SQL query.
        var query = new SqlFieldsQuery("SELECT Id, Name FROM User WHERE Name = ?", "Ali");
        var cursor = cache.Query(query);

        foreach (var row in cursor)
        {
            Console.WriteLine($"User: Id={row[0]}, Name={row[1]}");
        }

        Console.ReadLine();
    }
}

In the example above, we created a class named User and stored it in a cache named userCache. Then, we accessed the data stored in this cache using SQL queries.

Conclusion

Apache Ignite is a powerful platform that offers scalable and high-performance in-memory data management and processing solutions. By storing data in memory and providing distributed SQL support, Ignite can help accelerate your .NET applications and increase their efficiency. In this article, we covered the core features of Apache Ignite and how it can be integrated with .NET Core through examples. We also learned how to set up a structure working with persistent storage features using two Ignite instances.

To discover other powerful features that Ignite offers and learn more, you can check out the official Apache Ignite documentation.

Comments and Contributions

Feel free to comment to share your experiences or ask questions about Apache Ignite and .NET Core. For those who want to learn more about developing real-time data processing applications with Apache Ignite, Ignite's powerful features are truly worth exploring.

Mutation Testing

· 5 min read
Ahmet Buğra Kösen
Software Developer


In software development, unit tests are an indispensable tool for improving code quality and reliability. But how can we tell if our unit tests are truly effective? This is where Mutation Testing comes into play. In this article, we'll explore the concept of Mutation Testing, how it can be applied manually, and how to automate it using Stryker.NET, a popular tool in the .NET ecosystem.


What is Mutation Testing?

Mutation Testing is a technique used to evaluate the effectiveness of your tests. In this method, small changes (mutations) are made to your code to check whether your tests can catch these changes. If your tests fail to detect these mutations, it indicates that your test scenarios need improvement.

Why is it Important?

  • Improves Test Quality: It measures not just whether code is tested, but how effective the tests actually are.
  • Enhances Bug Detection: It helps identify potential bugs at an early stage.
  • Ensures Reliability: It shows how resilient your code is against changes.

How to Perform Mutation Testing Manually?

You can apply mutation testing principles without using automated tools. In this section, we'll demonstrate how to perform mutation testing manually with a simple example.

First, let's write a simple class and its corresponding unit tests.

MathOperations.cs:

namespace MutationDemo;

public class MathOperations
{
    public int Add(int a, int b) => a + b;
}

Unit Test:

using Xunit;
using FluentAssertions;

namespace MutationDemo.UnitTests;

public class MathOperationsTests
{
    [Fact]
    public void Add_ShouldReturnCorrectSum()
    {
        // Arrange
        var mathOperations = new MathOperations();

        // Act
        var result = mathOperations.Add(2, 3);

        // Assert
        result.Should().Be(5);
    }
}

When we run the test using the dotnet test command in the test project directory, we'll see that our test passes successfully:

Passed!  - Failed:     0, Passed:     1, Skipped:     0, Total:     1, Duration: < 1 ms

Now, let's create a mutation by intentionally introducing a bug in our code. For example, let's replace the + operator with the - operator:

public int Add(int a, int b) => a - b;

After saving the changes, run the test again using dotnet test. The test output should be as follows:

Failed!  - Failed:     1, Passed:     0, Skipped:     0, Total:     1, Duration: < 1 ms

The test failure indicates that our test caught this mutation. We've written a great unit test—our test can detect this bug in the code.


You can also try other possible mutations. For example, we can change the expression a + b to just a:

public int Add(int a, int b) => a;

When you run the tests again, the test should still fail. If the tests pass, it indicates that your tests are not comprehensive enough and you need to review them.


What is Stryker.NET?

While mutation testing can be done manually for small projects, it can be time-consuming and complex for larger projects. This is where Stryker.NET comes in. Stryker.NET is an open-source mutation testing tool developed for the .NET platform. It automatically creates mutations in your code and analyzes whether your tests can catch them.

Features

  • Easy Integration: Can be quickly integrated into your existing .NET projects.
  • Flexible Configuration: Compatible with different test frameworks (xUnit, NUnit, MSTest).
  • Detailed Reporting: Provides detailed reports including mutation scores and which mutations were not detected.

Mutation Testing with Stryker.NET

Let's automate the mutation test we performed manually earlier using Stryker.NET on the same project.

Requirements

  • .NET 6 or newer

  • xUnit for unit tests

  • Stryker.NET installed

    To install Stryker.NET as a global tool, run the following command in the terminal:

    dotnet tool install -g dotnet-stryker

Running Mutation Testing with Stryker.NET

Run the following command in your test project directory:

dotnet stryker

This command starts mutation testing with Stryker.NET's default settings. After the tests are completed, Stryker.NET will provide you with a report.


When we examine the HTML report generated by Stryker, we can see how many mutations Stryker created and in which parts of the code:


We can see that the mutation score is 100%. This means we've written our tests to cover the changes made to the Add method.

Stryker.NET provides detailed reports on which mutations were killed (caught by tests) and which survived (not caught by tests). By examining these reports, you can identify which scenarios are missing from your tests.
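For context, the mutation score shown in these reports is simply the killed mutants as a percentage of all mutants. A quick sketch of the arithmetic (not Stryker's actual code):

```python
def mutation_score(killed, survived):
    """Mutation score: percentage of mutants the test suite killed."""
    total = killed + survived
    return 100.0 * killed / total if total else 0.0

print(mutation_score(8, 2))  # 80.0
```

So a score of 100% means every mutant Stryker generated was caught by at least one failing test.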


Conclusion

Mutation Testing is a powerful method for understanding whether your unit tests are truly effective. While it can be applied manually, tools like Stryker.NET can automate this process, saving you time and effort. This way, you can improve your code quality and detect potential bugs at an early stage.

Side Note: I can't describe the disappointment I felt when I saw a mutation score of only 40% in my beloved library that contains nearly 2000 tests that I wrote with great effort 😟 If you don't want to experience the same disappointment, you can improve your test writing techniques by examining mutation reports.

See you in the next article…

Source Control Standards

· 11 min read
Ahmet Buğra Kösen
Software Developer


As teams grow, implementing certain standards becomes mandatory. Otherwise, managing projects or source code and maintaining the efficiency of the working environment becomes difficult.

Now that we've given a general answer to questions like "why are we doing this?" or "isn't this a waste of time among all the work?" for the standards we're about to discuss, let's take a look at what this document covers:

  • Semantic Versioning
  • Perfect Commit:
    • Perfect Commit Messages
    • Conventional Commits
  • Branch Naming

Semantic Versioning (SemVer)

If you're wondering why there's a section about versioning in a document called Source Control Standards, be patient and keep reading 😉

What is Semantic Versioning?

Semantic versioning is a standard way to determine version numbers in software projects. This standard ensures that version numbers are meaningful and predictable. Semantic versioning is typically used in the MAJOR.MINOR.PATCH format.

Let's take a look at what each section means and when it should be incremented:

  1. MAJOR: Incremented when backward-incompatible changes are made.
  2. MINOR: Incremented when backward-compatible new features are added.
  3. PATCH: Incremented when backward-compatible bug fixes are made.
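These bump rules are mechanical enough to sketch in a few lines. The helper below is a toy illustration, not a full SemVer implementation (pre-release tags and build metadata are ignored):

```python
def bump(version, change):
    """Bump a MAJOR.MINOR.PATCH version for a 'major', 'minor', or 'patch' change."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"   # breaking change resets minor and patch
    if change == "minor":
        return f"{major}.{minor + 1}.0"  # new feature resets patch
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("1.17.4", "major"))  # 2.0.0
```

Note that a MAJOR bump resets MINOR and PATCH to zero, and a MINOR bump resets PATCH; this is what makes version numbers predictable.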


Why Should You Use Semantic Versioning?

  • Understandability: You can easily understand how much the software has changed and what kind of effects these changes have on compatibility from the version number.
  • Reliability: Users of the software can better assess the risks of updates by looking at the version number.
  • Collaboration: When team members and users know the meaning of version numbers, they feel more confident about contributing to the project and using the software.

How to Do Semantic Versioning?

Let's assume we're developing an API for an e-commerce site as a team. Below are scenarios and how the version number would change in these scenarios:

  • You're adding basic features to the API and it's not yet suitable for users, meaning you don't have a stable version. In this case, the first version number should be 0.1.0.
  • You added the endpoints necessary for users to log in. In this case, the next version number should be 0.2.0.
  • You noticed there were bugs in the newly added endpoints and released a version fixing them. In this case, the version number should be 0.2.1.
  • You completed the basic features of the API and it's now ready for users. In this case, your first version number should be 1.0.0.
    • Releasing a stable version also means delivering the final trial version, so 0.2.1 and 1.0.0 may actually contain the same code. In this case, backward incompatibility is not expected; backward-incompatible changes usually appear in versions after 1.0.0.
  • You added certain features to your API and the current version number is 1.17.4. To improve API performance and fix security vulnerabilities, you updated the framework and packages you use, and consequently made backward-incompatible changes to the API. In this case, your next version number should be 2.0.0.
  • Your business unit asked you to add a new payment infrastructure. In this case, your new version number should be 2.1.0.

Now that we've semantically versioned our imaginary e-commerce site, let's wrap up:

Semantic versioning is a standard for keeping your project organized and understandable. At every stage of the project, we can provide clearer information about the project's status to users and team members by correctly updating version numbers. Using this standard, we can make the software development process more manageable and reliable.

For rules you should follow when applying Semantic Versioning and more information, you can check out the SemVer Official Website.


Perfect Commit Messages


Although they may seem unimportant, commit messages are an important part of the software development process. A well-written commit message helps both you and your team members understand the project better. It eliminates questions like "Who made this change and why?" and provides significant time savings, especially for team members who constantly work on different projects.

Writing the perfect commit message is only half the job; breaking changes into parts and planning what should be added to these parts is equally important. Commits can play an important role in how you approach a task. Logically grouping changes into commits also allows you to improve the software development process by planning a task and breaking it into smaller pieces.

This will make you think more about the task and the solution you're producing, not just at the beginning but also when breaking changes into commits and writing the commit message. This can help you review your implementation and perhaps notice overlooked edge cases, missing tests, or anything else you might have forgotten.

How?

Now that we've left the task of properly breaking your development into commits to you, let's answer the question of how we should write the commit message. Consistency is very important here, so teams should first discuss and agree on the following three topics:

  • Style: Plays an important role in making the commit history readable. Includes topics such as grammar, punctuation, capitalization, and line lengths.
  • Content: Standardizing content is not easy. However, it should include information about why the changes were made and how they were implemented, and when necessary, the technical details and effects of the changes.
  • Metadata: Should include additional information such as Issue Tracking IDs, notes indicating whether changes have been tested, or findings or comments obtained during the code review process if necessary.

There's no single way to address these three topics, so it's open to discussion, but most Git commit messages follow a certain pattern. We'll examine this commonly used pattern below.

Template

[subject]

[optional body]

[optional footer(s)]

Subject

Just like in an email, the subject is a very important part. It's usually the first, perhaps the only part people will read, so it should be visually appealing and easy to understand, avoiding unnecessary capitalization and punctuation, and using the right keywords.

The imperative mood is standard; when Git creates a commit on your behalf (for example, when you run git merge or git revert), it uses the imperative mood. This means you should write "Add" instead of "Added" or "Adds". The text in the subject should complete this sentence: "If applied, this commit...". Most teams apply the following rules for commit subjects:

  • Should start with a capital letter
  • Should not end with a period
  • Should be 50 characters or less

Example of a good commit subject: Update configuration files with new staging URL
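These subject rules are simple enough to check automatically. The helper below is a hypothetical sketch, not any particular linter:

```python
def check_subject(subject):
    """Return a list of violations of the common commit-subject rules."""
    problems = []
    if not subject or not subject[0].isupper():
        problems.append("should start with a capital letter")
    if subject.endswith("."):
        problems.append("should not end with a period")
    if len(subject) > 50:
        problems.append("should be 50 characters or less")
    return problems

print(check_subject("Update configuration files with new staging URL"))  # []
```

A check like this is easy to wire into a commit-msg hook so violations are flagged before the commit lands.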

Again, these aren't hard rules in the sense of "you can't do this, you'll turn to stone if you do". You can even add emojis to your commit messages if you want 😊

Body

The subject is often self-explanatory, but sometimes it's necessary to add more information to the "body" field. We use this field to provide more context about WHAT and WHY changed.

Most teams apply the following rules for the commit body:

  • Use a blank line to separate from the subject
  • Organize paragraphs with blank lines or bullet lists etc.
  • Line length should be 72 characters or less

Tim Pope's template is a good example of the style we should aim for:

Short (50 chars or less) summary of changes

More detailed explanatory text, if necessary. Wrap it to 72 characters.
The blank line separating the summary from the body is critical (unless
you omit the body entirely); tools like rebase can get confused if you run
the two together.

Further paragraphs come after blank lines. Bullet points are okay, too.

- Use a hyphen or asterisk for bullet points.
- Capitalize the first letter of each point.

Metadata/Footer

We can add Azure DevOps tasks or user stories, Pull Requests, or Jira tickets related to the commit to the footer. This field is also where deprecated features and backward-incompatible changes should be indicated. Example:

BREAKING CHANGE: <summary>
<blank line>
Fixes #<user story>
Closes #<pr>

When we put it all together, our commits should look like the example below:

Add user authentication feature

- Implemented user authentication using JWT tokens for secure login.
- Added user registration functionality with password hashing for security.

Fixes #123
Closes #456
Not Tested

Conventional Commits


Building on the perfect commit from the previous section, we now apply the standards set out in the Conventional Commits specification to the commit message itself, so that we get a meaningful commit history, can generate various reports from that history, and unlock certain automation capabilities. In other words, our human-readable commit messages become human- and machine-readable. This specification is also compatible with Semantic Versioning.

Template

<type>[optional scope]: <subject>

[optional body]

[optional footer(s)]

A Conventional Commit must contain the following structural elements:

  1. fix: A commit of type fix fixes a bug in your code (parallel to PATCH in semantic versioning).
  2. feat: A commit of type feat adds a new feature to your code (parallel to MINOR in semantic versioning).
  3. BREAKING CHANGE: A commit with a footer starting with BREAKING CHANGE: or with a ! added after type/scope introduces a backward-incompatible change (parallel to MAJOR in semantic versioning).

Other commonly used types:

  1. docs: Documentation only changes
  2. style: Changes that don't affect the meaning of the code
  3. refactor: Code change that neither fixes a bug nor adds a feature
  4. perf: Performance improvements
  5. test: Adding missing tests or correcting existing tests
  6. build: Changes that affect the build system or external dependencies
  7. ci: Changes to CI configuration files and scripts
  8. chore: Other changes that don't modify src or test files
  9. revert: Reverts a previous commit
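The header structure above can be matched with a simple pattern. The sketch below is a deliberately simplified check, not the full Conventional Commits grammar:

```python
import re

# Rough pattern for the "<type>[optional scope][!]: <subject>" header line.
HEADER = re.compile(
    r"^(?P<type>[a-z]+)(\((?P<scope>[^)]+)\))?(?P<bang>!)?: (?P<subject>.+)$"
)

def parse_header(line):
    """Return the parsed header fields, or None if the line doesn't conform."""
    match = HEADER.match(line)
    return match.groupdict() if match else None

print(parse_header("feat(api)!: send an email to the customer"))
```

This machine-readability is exactly what tools build on: a changelog generator can group commits by type, and a release tool can map fix/feat/! to patch/minor/major bumps.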

Examples

A commit message with subject and breaking change footer:

feat: allow provided config object to extend other configs

BREAKING CHANGE: `extends` key in config file is now used for extending other config files

A commit message with ! to draw attention to breaking change:

feat!: send an email to the customer when a product is shipped

A commit message with scope:

feat(api)!: send an email to the customer when a product is shipped

A commit message without body:

docs: correct spelling of CHANGELOG

For remaining details, check out the Conventional Commits Specification.

Benefits of using Conventional Commits:

  • Automatically generating CHANGELOGs
  • Automatically determining semantic version bumps
  • Communicating the nature of changes to teammates and stakeholders
  • Triggering build and publish processes
  • Making it easier for people to contribute to your projects



Branch Naming


Before making changes to the code base, we all create a branch. Managing these branches can become difficult in some cases. To prevent this, effectively naming and organizing branches can increase the efficiency of the development process.

Regular Branches

Regular branches in Git are long-lived branches:

  • Master (master/main) Branch: The default production branch
  • Development (dev) Branch: Main development branch for integrating features
  • QA (QA/test) Branch: Branch containing code ready for QA testing

Style

  • Lowercase and hyphens: Use lowercase letters and hyphens to separate words. Example: feature/new-login
  • Alphanumeric characters only: Only use alphanumeric characters (a-z, 0-9) and hyphens
  • Avoid consecutive hyphens: feature--new-login is confusing
  • Don't end with hyphen: feature-new-login- is incorrect
  • Be descriptive: The naming should reflect the work done in the branch

Branch Prefixes

  • feature/: New features. Example: feature/login-system
  • bugfix/: Bug fixes. Example: bugfix/header-styling
  • hotfix/: Critical production fixes. Example: hotfix/critical-security-issue
  • release/: Release preparation. Example: release/v1.0.1
  • docs/: Documentation changes. Example: docs/api-endpoints
  • experimental/: Experimental features. Example: experimental/new-algorithm
  • wip/: Work in progress. Example: wip/refactor-auth-system

Including ticket numbers from project management tools is common:

  • bugfix/EMJ-1789-fix-header-styling
  • feature/US-1288-new-login-system
  • feature/T-1289-new-login-system
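Putting the style rules and prefixes together, a branch-name check might look like the sketch below. It's a hypothetical helper following the strict lowercase rules; uppercase ticket IDs like EMJ-1789 or dotted release tags like v1.0.1 would need the pattern loosened:

```python
import re

# Prefix list is illustrative; adapt it to your team's conventions.
PREFIXES = ("feature", "bugfix", "hotfix", "release", "docs", "experimental", "wip")

def is_valid_branch(name):
    """Check the prefix and the lowercase/alphanumeric-with-single-hyphens style."""
    prefix, _, rest = name.partition("/")
    if prefix not in PREFIXES or not rest:
        return False
    # lowercase alphanumeric segments joined by single hyphens:
    # no leading, trailing, or consecutive hyphens
    return re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", rest) is not None

print(is_valid_branch("feature/new-login"))  # True
```

A check like this fits naturally into a server-side pre-receive hook or CI step that rejects non-conforming branch names.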

Whether to apply these standards or postpone them is up to you. However, we shouldn't forget LeBlanc's law that Robert C. Martin refers to in Clean Code: "Later equals never." 🙂

