Angular Change Detection and the OnPush Strategy



You have started using Angular for all of your favorite projects. You know what Angular has to offer and how you can leverage it to build amazing web apps. But there are certain things about Angular worth knowing, and knowing them can make you better at using Angular in your projects.

With data flow at the center of almost all things Angular, change detection is worth knowing about: it will help you trace bugs much more easily and give you an opportunity to further optimize your apps when working with complex data sets.


In this article, you will learn how Angular detects changes in its data structures and how you can make them immutable to make the most out of Angular’s change detection strategies.

Change Detection in Angular

When you change any of your models, Angular detects the changes and immediately updates the views. This is change detection in Angular. The purpose of this mechanism is to make sure the underlying views are always in sync with their corresponding models. This core feature of Angular is what makes the framework tick and is partly the reason why Angular is a neat choice for developing modern web apps.

A model in Angular can change as a result of any of the following scenarios:

  • DOM events (click, hover over, etc.)
  • AJAX requests
  • Timers (setTimeout(), setInterval())
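
For instance, here is a minimal sketch of what such a model change looks like (the component name, selector, and template are invented for illustration): a DOM click event updates a bound property, and change detection re-renders the view.

import {Component} from '@angular/core';

@Component({
  selector: 'app-counter',
  template: `<button (click)="increment()">Clicked {{ count }} times</button>`
})
export class CounterComponent {
  count = 0;

  increment() {
    // The click handler updates the model; change detection then re-renders
    // the template so the new count shows up in the view.
    this.count++;
  }
}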

Change Detectors

All Angular apps are made up of a hierarchical tree of components. At runtime, Angular creates a separate change detector class for every component in the tree, which then eventually forms a hierarchy of change detectors similar to the hierarchy tree of components.

Whenever change detection is triggered, Angular walks down this tree of change detectors to determine if any of them have reported changes.

The change detection cycle is always performed once for every detected change and starts from the root change detector and goes all the way down in a sequential fashion. This sequential design choice is nice because it updates the model in a predictable way since we know component data can only come from its parent.


The change detectors provide a way to keep track of the component’s previous and current states as well as its structure in order to report changes to Angular.

When a change detector reports a change, Angular instructs the corresponding component to re-render and update the DOM accordingly.

Change Detection Strategies

Value vs. Reference Types

In order to understand what a change detection strategy is and why it works, we must first understand the difference between value types and reference types in JavaScript. If you are already familiar with how this works, you can skip this section.

To get started, let’s review value types and reference types and their classifications.

Value Types

  • Boolean
  • Null
  • Undefined
  • Number
  • String

For simplicity, one can imagine that these types simply store their value on the stack memory (which is technically not true but it’s sufficient for this article). See the stack memory and its values in the image below for example.

[Image: value types stored directly in stack memory]

Reference Types

  • Arrays
  • Objects
  • Functions

These types are a bit more complicated as they store a reference on the stack memory, which points to their actual value on the heap memory. You can see how stack memory and heap memory work together in the example image below. We see the stack memory references the actual values of the reference type in the heap memory.

[Image: stack references pointing to reference-type values in heap memory]

The important distinction to make between value types and reference types is that, in order to read the value of the value type, we just have to query the stack memory, but in order to read the value of a reference type, we need to first query the stack memory to get the reference and then secondly use that reference to query the heap memory to locate the value of the reference type.
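
As a rough illustration in plain JavaScript (the variable names are made up), the difference shows up when comparing with ===:

// Value types are copied and compared by value:
var a = "hello";
var b = a;               // copies the value itself
b = "world";
console.log(a === b);
// => false

// Reference types are copied and compared by reference:
var first = {foo: "bar"};
var second = first;      // copies the reference, not the object on the heap
second.foo = "baz";
console.log(first === second);
// => true (both variables still point at the same object)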

Default Strategy

As we stated earlier, Angular monitors changes on the model in order to make sure it catches all of the changes. It will check for any differences between the previous state and current state of the overall application model.

The question that Angular asks in the default change detection strategy is: Has any value in the model changed? For reference types, however, we can adopt a strategy that lets us ask a better question. This is where the OnPush change detection strategy comes in.

OnPush Strategy

The main idea behind the OnPush strategy manifests from the realization that if we treat reference types as immutable objects, we can detect if a value has changed much faster. When a reference type is immutable, this means every time it is updated, the reference on the stack memory will have to change. Now we can simply check: Has the reference (in the stack) of the reference type changed? If yes, only then check all the values (on the heap). Refer back to the previous stack heap diagrams if this is confusing.

The OnPush strategy basically asks two questions instead of one. Has the reference of the reference type changed? If yes, then have the values in heap memory changed?

For example, assume we have an immutable array with 30 elements and we want to know if there are any changes. We know that, in order for there to be any updates to the immutable array, the reference (on the stack) of it would have to have changed. This means we can initially check to see if the reference to the array is any different, which would potentially save us from doing 30 more checks (in the heap) to determine which element is different. This is called the OnPush strategy.

So, you might ask, what does it mean to treat reference types as immutable? It means we never set the property of a reference type, but instead reassign the value altogether. See below:

Treating objects as mutable:

static mutable() {
  var before = {foo: "bar"};
  var current = before;
  current.foo = "hello";
  console.log(before === current);
  // => true
}

Treating objects as immutable:

static immutable() {
  var before = {foo: "bar"};
  var current = before;
  current = {foo: "hello"};
  console.log(before === current);
  // => false
}

Note that, in the examples above, we are “treating” reference types as immutable by convention, so in the end we are still working with mutable objects, but just “pretending” they are immutable.

So how do you implement the OnPush strategy for a component? All you need to do is add the changeDetection parameter to the component's @Component decorator.

import {ChangeDetectionStrategy, Component} from '@angular/core';

@Component({
  // ...
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class OnPushComponent {
  // ...
}
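
To see what this buys us in practice, here is a sketch of an OnPush child fed by a parent (the component names, selectors, and input below are invented for illustration; ChangeDetectionStrategy.OnPush and @Input are the real Angular APIs). The OnPush child only re-renders when the reference of its input changes:

import {ChangeDetectionStrategy, Component, Input} from '@angular/core';

@Component({
  selector: 'todo-list',
  template: '<ul><li *ngFor="let todo of todos">{{ todo }}</li></ul>',
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class TodoListComponent {
  @Input() todos: string[] = [];
}

@Component({
  selector: 'todo-page',
  template: '<todo-list [todos]="todos"></todo-list>'
})
export class TodoPageComponent {
  todos = ["Write article"];

  addMutable(todo: string) {
    this.todos.push(todo);              // same reference: the OnPush child is not updated
  }

  addImmutable(todo: string) {
    this.todos = [...this.todos, todo]; // new reference: the OnPush child re-renders
  }
}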

Immutable.js

It is a good idea to enforce immutability if one decides to use the OnPush strategy on an Angular component. That is where Immutable.js comes in.

Immutable.js is a library created by Facebook for immutability in JavaScript. It provides many immutable data structures, such as List, Map, and Stack. For the purposes of this article, List and Map will be illustrated. For more details, check out the official documentation.

In order to add Immutable.js to your projects, please make sure to go into your terminal and run:

$ npm install immutable --save

Also make sure to import the data structures you are using from Immutable.js in the component where you are using it.

import {Map, List} from 'immutable';

This is how an Immutable.js Map can be used:

var foobar = {foo: "bar"};
var immutableFoobar = Map(foobar);

console.log(immutableFoobar.get("foo"));
// => bar

And this is how a List can be used:

var helloWorld = ["Hello", "World!"];
var immutableHelloWorld = List(helloWorld);
console.log(immutableHelloWorld.first());
// => Hello
console.log(immutableHelloWorld.last());
// => World!

helloWorld.push("Hello Mars!");
console.log(immutableHelloWorld.last());
// => World!
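
Because the List never changes in place, "updating" it means calling its own push method, which returns a brand new List with a new reference. That new reference is exactly what the OnPush strategy checks for (a small sketch continuing the example above):

var helloWorldMars = immutableHelloWorld.push("Hello Mars!");

console.log(immutableHelloWorld.last());
// => World! (the original List is untouched)

console.log(helloWorldMars.last());
// => Hello Mars!

console.log(helloWorldMars === immutableHelloWorld);
// => false (a new reference, so an OnPush component would pick up the change)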

Drawbacks of Using Immutable.js

There are a couple of arguable drawbacks to using Immutable.js.

As you may have noticed, it’s a bit cumbersome to use its API, and a traditional JavaScript developer may not like this. A more serious problem has to do with not being able to implement interfaces for your data model since Immutable.js doesn’t support interfaces.

Wrap Up

You may be asking why the OnPush strategy is not the default strategy for Angular. I presume it is because Angular didn’t want to force JavaScript developers to work with immutable objects. But, that doesn’t mean you are forbidden from using it.

If that is something you want to leverage in your next web project, you now know how easy Angular makes it to switch to a different change detection strategy.

Feel free to share on social networks. Find the buttons below this post. This opinion article is for informational purposes only.

Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

This article was written by Ahmet Shapiro-Erciyas.

Brought to you by Toptal

Edited by Temitope Adelekan


					

Hiring a Part-Time CFO: Key Duties of the Strategic CFO Role



Why Hire a Part-Time CFO?

The first reaction to this title might be: why would you want only a part-time CFO when they are such a key player in the business? If your business is growing fast, or you have complex accounting projects to implement, or even an important fundraising event coming up, a CFO is the obvious solution to look for. Traditionally, this would manifest in the form of a search for a full-time CFO, but at times a temporary one can bring certain strategic advantages. For example, consider the following scenarios:

  • First-time CFO: Key hires are critical. Getting them wrong can be costly from a time and financial perspective. A part-time CFO allows management to test-drive the role, see what results it brings, and learn what traits the role demands.
  • Firefighting: An experienced outsider can come in and solve a complex issue using earned knowledge from having already solved it in other organizations.
  • Caretaker: If there is an interim period where the CFO chair is vacant (gardening leave, maternity leave) an interim can come in and maintain continuity.
  • Jolt of ideas: At times, companies can become myopic in their views. Bringing in an outsider for a period can allow for fresh ideas from someone whose goals are very transparent.
  • The Closer: If there is a significant funding or exit event approaching, a part-time CFO can lead the business over the line. With timing- and bonus-related advantages, this can be an excellent way of achieving the desired result.

In terms of cost, the median annual CFO salary in the USA is between $406,474 and $639,000 for public and private companies respectively. If a headhunter was used, their total fees can then add approximately $60,000 to the bill. This is not a cheap exercise and, if done poorly, can be expensive and bring hidden costs to the business from inertia. On the other hand, you could hire a seasoned professional as a part-time CFO within a matter of days and at an annual cost of between $156,000 and $208,000. This also comes with the flexibility of no termination fees and the license to roll the contract over on a weekly basis. The position could also change throughout the year, giving the opportunity to bring in specialists to tackle specific problems. Why have a generalist, when you can hire specialists at a third of the cost?

There are four key pillars to the role of the CFO; in this article, we will look at these roles in the context of how an interim CFO may be required and can help. Each section will also detail desirable characteristics that such a role requires.

Accounting

While they will not necessarily be preparing the financial statements, a CFO is the person who has to sign off on them and defend them to management, investors, and regulators. Control is a key element of the role along with ensuring that financial records are presented in a timely, accurate, and informative manner.


The accounting process for a team with a tenured CFO is a well-oiled machine. Controllers and accountants will prepare the finances and the CFO will communicate business decisions down the chain, review the figures, and report them upwards. However, when seeking a part-time CFO, there is likely to be a more hands-on role required of them in the accounting process. Perhaps:

  • The process is a mess. Cracks may have been papered over before in processes such as reconciliations.
  • A modernization process is required with software.
  • A more strategic insight is required for the board pack and reporting channels.
  • The team needs to be trained for new business units/incorporating acquisitions.

Such a role will require a strong set of project management and delegation skills. An ability to communicate clearly and inspire a potentially jaded/overworked accounting department also cannot be overlooked. When assessing CFOs for this role, pay attention to their most recent roles and the prestige of their last company. A CFO who has worked in a large, traditional organization may not have received enough exposure in recent years to changing accounting technology and may not be able to cope with the stress and flux of an accounting clean-up operation.

Treasury

CFOs also oversee the treasury function of an organization and, depending on the size of the company, they will delegate day-to-day management of it to a treasurer. For that, the main concerns and responsibilities for a CFO are to oversee on a high level the following:

  • Risk: FX, interest rates, counterparties, operational risk
  • Liquidity: Working capital management and availability of credit lines
  • Capital Structure: Asset liability management and balance sheet composition

A strategic CFO will view the balance sheet as a ship that they sail and one that can be powerfully maneuvered to the benefit of the organization. During the Financial Crisis of 2008, balance sheets became the focal point of all businesses. Strategic CFOs were those that were both defensive and proactive to this. They pivoted their capital structures towards longer dated liabilities and shorter dated assets, giving their businesses the breathing space to operate and prepare for future opportunities.


While from an accounting perspective a CFO will need book smarts and project management, on a treasury level, a strong understanding of the financial markets takes over. Prior experience of dealing with financial instruments will ensure that they are privy to the machinations of foreign exchange risk and the suffocation that poor working capital management can enact on new business growth.

The best CFOs are those that sit with a commercial hat on and view their role as a business generator, not a supporter. If working capital management is poor, a company might be bleeding money away on unnecessary interest payments on borrowing. On the other side of the coin, unnecessarily high cash reserves are a sign of a CFO that lacks creativity or proactivity.

A CFO with a treasury background in banking will be exceptionally strong on the liquidity side, but perhaps not so much on working capital. Conversely, a CFO with a manufacturing background will deviate to the opposite. For that, dependent on the pressing treasury needs in the organization, pay attention to the trained background of the candidate.

Capital Markets

Treasury duties are mostly concerned with cash flow from operations; when it comes to cash flow from financing, a far bigger picture is required. Tapping capital markets and managing investors is a role that a CFO must instigate and lead, funding the company with capital ranging from small angel rounds through to IPOs and institutional bond issuances.

Part-time CFOs are normally hired especially for a role like tapping the capital markets. If the founding team is not experienced in this area, their negotiation power can be boosted by bringing in a skilled operator to lead the process. Again, this demonstrates the value of a part-time CFO, where specialist expertise can be brought in for a specific project. By the time a startup is raising Series A, where a priced round is being negotiated, they must be prepared to ensure that the appropriate funding arrives at a fair valuation. Yet, for early stage companies looking for seed rounds, it is more important to have a CFO that has the creativity to find investors and communicate the long-term vision for the business. Bringing in different part-time CFOs for different stages of fundraising can be strategically critical for startups.


Capital can be raised through debt or equity, from public or private sources. The CFO should be able to clearly inform the management team both when the company should be raising and what type of instrument they need. The previously mentioned areas of accounting and treasury lead into this role, as effective management of those responsibilities will provide perfect insight as to when capital needs to be raised.

While the CEO is an important public figurehead for companies, the CFO also plays an important role communicating the financial side. They are expected to be made available to investors to provide deeper color to the financial statement numbers. Being able to communicate a vision for the company based on numbers, not rhetoric, is important. When assessing candidates for this role, pay attention to their confidence and communication skills.

Business Strategy

Effective CFOs provide a different perspective in both the boardroom and on the business front line. They can be the voice of reason that drive revenue growth through margin expansion, pointing out cost savings and maintaining the cadence between making sales and making profits.

A CFO will have no particular bias to a certain business line, geography, or customer. Their metric is the financial stability and health of the company, and for that, they can be a crucial source of impartial advice at a strategic level. The viewpoint of a CFO can be more long-term than that of other divisional heads, because their responsibilities stretch that far, for example: “can our debt be paid off in ten years’ time?”

Candidates that have a diverse mix of operating and financial roles throughout their career demonstrate these traits, especially someone that has worked in sales or product development, where they can marry the needs of the revenue generators with the support staff. According to Ernst & Young, when CFOs were asked to assess their input to strategy, the majority of respondents said that their most valuable input is across all strategy, as opposed to specifically within the realms of financial guidance:

Table 1: Survey of Where CFOs See Their Strategic Input Being Required

When interviewing part-time CFOs, you need to find someone that can grasp the business quickly and hit the ground running. Ask them questions related to your business, pose a commercial problem to them, and see how they respond. A confident and well-rounded CFO will relish this opportunity and will not shy away from it.

Summary

Gone are the days when CFOs were bookkeepers. They are a differentiated tool to have in the management arsenal. The more flexibility and diverse career experience they have, the more they will be able to offer your business. Aside from paying attention to their classical training in finance (the bedrock for the job), look at their commercial nous and ability to grasp the bigger picture and link parts of the business together.

Feel free to share on social networks. Find the buttons below this post. This opinion article is for informational purposes only.

Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

This article was written by Toptal.

Brought to you by Toptal

Edited by Temitope Adelekan

29 Microsoft Excel Hacks to Make Life Easier and More Productive (Infographic)



Regardless of what industry you work in, chances are you’ve probably dealt with spreadsheets in some capacity. Using spreadsheets typically means using Microsoft Excel.

Excel is far and away the leading spreadsheet software as it is used by most businesses. However, Excel has a bit of a learning curve, and if you’re new to the program, it can be a bit overwhelming.

Fortunately, GetVoIP has created an infographic that shares tips, shortcuts, and hacks for using Microsoft Excel more efficiently. These hacks include:

  • Selecting all cells.
  • Inserting new rows or columns.
  • Bolding, italicising and underlining text.
  • Inserting date and time.
  • Switching between formulas and values, etc.

Below is the visual to learn the highlighted tricks and other helpful keyboard shortcuts for Excel.

Feel free to share on social networks. Find the buttons below this post.

Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj


Brought to you by GetVoIP

Edited by Temitope Adelekan

Creating a Ruby DSL: A Guide to Advanced Metaprogramming


Domain-specific languages (DSLs) are an incredibly powerful tool for making it easier to program or configure complex systems. They are also everywhere—as a software engineer, you are most likely using several different DSLs on a daily basis.


In this article, you will learn what domain specific languages are, when they should be used, and finally how you can make your very own DSL in Ruby using advanced metaprogramming techniques.

This article builds upon Nikola Todorovic’s introduction to Ruby metaprogramming, also published on the Toptal Blog. So if you are new to metaprogramming, make sure you read that first.

What Is a Domain Specific Language?

The general definition of DSLs is that they are languages specialized to a particular application domain or use case. This means that you can only use them for specific things—they are not suitable for general-purpose software development. If that sounds broad, that’s because it is—DSLs come in many different shapes and sizes. Here are a few important categories:

  • Markup languages such as HTML and CSS are designed for describing specific things like the structure, content, and styles of web pages. It is not possible to write arbitrary algorithms with them, so they fit the description of a DSL.
  • Macro and query languages (e.g., SQL) sit on top of a particular system or another programming language and are usually limited in what they can do. Therefore they obviously qualify as domain specific languages.
  • Many DSLs do not have their own syntax—instead, they use the syntax of an established programming language in a clever way that feels like using a separate mini-language.

This last category is called an internal DSL, and it is one of these that we are going to create as an example very soon. But before we get into that, let’s take a look at a few well-known examples of internal DSLs. The route definition syntax in Rails is one of them:

Rails.application.routes.draw do
  root to: "pages#main"

  resources :posts do
    get :preview

    resources :comments, only: [:new, :create, :destroy]
  end
end

This is Ruby code, yet it feels more like a custom route definition language, thanks to the various metaprogramming techniques that make such a clean, easy-to-use interface possible. Notice that the structure of the DSL is implemented using Ruby blocks, and method calls such as get and resources are used for defining the keywords of this mini-language.

Metaprogramming is used even more heavily in the RSpec testing library:

describe UsersController, type: :controller do
  before do
    allow(controller).to receive(:current_user).and_return(nil)
  end

  describe "GET #new" do
    subject { get :new }

    it "returns success" do
      expect(subject).to be_success
    end
  end
end

This piece of code also contains examples for fluent interfaces, which allow declarations to be read out loud as plain English sentences, making it a lot easier to understand what the code is doing:

# Stubs the `current_user` method on `controller` to always return `nil`
allow(controller).to receive(:current_user).and_return(nil)

# Asserts that `subject.success?` is truthy
expect(subject).to be_success

Another example of a fluent interface is the query interface of ActiveRecord and Arel, which uses an abstract syntax tree internally for building complex SQL queries:

Post.                               # =>
  select([                          # SELECT
    Post[Arel.star],                #   `posts`.*,
    Comment[:id].count.             #     COUNT(`comments`.`id`)
      as("num_comments"),           #       AS num_comments
  ]).                               # FROM `posts`
  joins(:comments).                 # INNER JOIN `comments`
                                    #   ON `comments`.`post_id` = `posts`.`id`
  where.not(status: :draft).        # WHERE `posts`.`status` <> 'draft'
  where(                            # AND
    Post[:created_at].lte(Time.now) #   `posts`.`created_at` <=
  ).                                #     '2017-07-01 14:52:30'
  group(Post[:id])                  # GROUP BY `posts`.`id`

Although the clean and expressive syntax of Ruby along with its metaprogramming capabilities makes it uniquely suited for building domain specific languages, DSLs exist in other languages as well. Here is an example of a JavaScript test using the Jasmine framework:

describe("Helper functions", function() {
  beforeEach(function() {
    this.helpers = window.helpers;
  });

  describe("log error", function() {
    it("logs error message to console", function() {
      spyOn(console, "log").and.returnValue(true);
      this.helpers.log_error("oops!");
      expect(console.log).toHaveBeenCalledWith("ERROR: oops!");
    });
  });
});

This syntax is perhaps not as clean as that of the Ruby examples, but it shows that with clever naming and creative use of the syntax, internal DSLs can be created using almost any language.

The benefit of internal DSLs is that they don’t require a separate parser, which can be notoriously difficult to implement properly. And because they use the syntax of the language they are implemented in, they also integrate seamlessly with the rest of the codebase.

What we have to give up in return is syntactic freedom—internal DSLs have to be syntactically valid in their implementation language. How much you have to compromise in this regard depends largely on the selected language, with verbose, statically typed languages such as Java and VB.NET being on one end of the spectrum, and dynamic languages with extensive metaprogramming capabilities such as Ruby on the other end.

Building Our Own—A Ruby DSL for Class Configuration

The example DSL we are going to build in Ruby is a reusable configuration engine for specifying the configuration attributes of a Ruby class using a very simple syntax. Adding configuration capabilities to a class is a very common requirement in the Ruby world, especially when it comes to configuring external gems and API clients. The usual solution is an interface like this:

MyApp.configure do |config|
  config.app_id = "my_app"
  config.title = "My App"
  config.cookie_name = "my_app_session"
end

Let’s implement this interface first—and then, using it as the starting point, we can improve it step by step by adding more features, cleaning up the syntax, and making our work reusable.

What do we need to make this interface work? The MyApp class should have a configure class method that takes a block and then executes that block by yielding to it, passing in a configuration object that has accessor methods for reading and writing the configuration values:

class MyApp
  # ...

  class << self
    def config
      @config ||= Configuration.new
    end

    def configure
      yield config
    end
  end

  class Configuration
    attr_accessor :app_id, :title, :cookie_name
  end
end

Once the configuration block has run, we can easily access and modify the values:

MyApp.config
=> #<MyApp::Configuration:0x2c6c5e0 @app_id="my_app", @title="My App", @cookie_name="my_app_session">

MyApp.config.title
=> "My App"

MyApp.config.app_id = "not_my_app"
=> "not_my_app"

So far, this implementation does not feel like a custom language enough to be considered a DSL. But let’s take things one step at a time. Next, we will decouple the configuration functionality from the MyApp class and make it generic enough to be usable in many different use cases.

Making It Reusable

Right now, if we wanted to add similar configuration capabilities to a different class, we would have to copy both the Configuration class and its related setup methods into that other class, as well as edit the attr_accessor list to change the accepted configuration attributes. To avoid having to do this, let’s move the configuration features into a separate module called Configurable. With that, our MyApp class will look like this:

class MyApp
  include Configurable

  # ...
end

Everything related to configuration has been moved to the Configurable module:

module Configurable
  def self.included(host_class)
    host_class.extend ClassMethods
  end

  module ClassMethods
    def config
      @config ||= Configuration.new
    end

    def configure
      yield config
    end
  end

  class Configuration
    attr_accessor :app_id, :title, :cookie_name
  end
end

Not much has changed here, except for the new self.included method. We need this method because including a module only mixes in its instance methods, so our config and configure class methods will not be added to the host class by default. However, if we define a special method called included on a module, Ruby will call it whenever that module is included in a class. There we can manually extend the host class with the methods in ClassMethods:

def self.included(host_class)     # called when we include the module in `MyApp`
  host_class.extend ClassMethods  # adds our class methods to `MyApp`
end

We are not done yet—our next step is to make it possible to specify the supported attributes in the host class that includes the Configurable module. A solution like this would look nice:

class MyApp
  include Configurable.with(:app_id, :title, :cookie_name)

  # ...
end

Perhaps somewhat surprisingly, the code above is syntactically correct—include is not a keyword but simply a regular method that expects a Module object as its parameter. As long as we pass it an expression that returns a Module, it will happily include it. So, instead of including Configurable directly, we need a with method on Configurable that generates a new module customized with the specified attributes:

module Configurable
  def self.with(*attrs)
    # Define anonymous class with the configuration attributes
    config_class = Class.new do
      attr_accessor *attrs
    end

    # Define anonymous module for the class methods to be "mixed in"
    class_methods = Module.new do
      define_method :config do
        @config ||= config_class.new
      end

      def configure
        yield config
      end
    end

    # Create and return new module
    Module.new do
      singleton_class.send :define_method, :included do |host_class|
        host_class.extend class_methods
      end
    end
  end
end

There is a lot to unpack here. The entire Configurable module now consists of just a single with method, with everything happening within that method. First, we create a new anonymous class with Class.new to hold our attribute accessor methods. Because Class.new takes the class definition as a block and blocks have access to outside variables, we are able to pass the attrs variable to attr_accessor without problems.

def self.with(*attrs)           # `attrs` is created here
  # ...
  config_class = Class.new do   # class definition passed in as a block
    attr_accessor *attrs        # we have access to `attrs` here
  end

The fact that blocks in Ruby have access to outside variables is also the reason why they are sometimes called closures, as they include, or “close over” the outside environment that they were defined in. Note that I used the phrase “defined in” and not “executed in”. That’s correct – regardless of when and where our define_method blocks will eventually be executed, they will always be able to access the variables config_class and class_methods, even after the with method has finished running and returned. The following example demonstrates this behavior:

def create_block
  foo = "hello"            # define local variable
  return Proc.new { foo }  # return a new block that returns `foo`
end

block = create_block       # call `create_block` to retrieve the block

block.call                 # even though `create_block` has already returned,
=> "hello"                 #   the block can still return `foo` to us

Now that we know about this neat behavior of blocks, we can go ahead and define an anonymous module in class_methods for the class methods that will be added to the host class when our generated module is included. Here we have to use define_method to define the config method, because we need access to the outside config_class variable from within the method. Defining the method using the def keyword would not give us that access because regular method definitions with def are not closures – however, define_method takes a block, so this will work:

config_class = # ...               # `config_class` is defined here
# ...
class_methods = Module.new do      # define new module using a block
  define_method :config do         # method definition with a block
    @config ||= config_class.new   # even two blocks deep, we can still
  end                              #   access `config_class`

Finally, we call Module.new to create the module that we are going to return. Here we need to define our self.included method, but unfortunately we cannot do that with the def keyword, as the method needs access to the outside class_methods variable. Therefore, we have to use define_method with a block again, but this time on the singleton class of the module, as we are defining a method on the module instance itself. Oh, and since define_method is a private method of the singleton class, we have to use send to invoke it instead of calling it directly:

class_methods = # ...
# ...
Module.new do
  singleton_class.send :define_method, :included do |host_class|
    host_class.extend class_methods  # the block has access to `class_methods`
  end
end

Phew, that was some pretty hardcore metaprogramming already. But was the added complexity worth it? Take a look at how easy it is to use and decide for yourself:

class SomeClass
  include Configurable.with(:foo, :bar)

  # ...
end

SomeClass.configure do |config|
  config.foo = "wat"
  config.bar = "huh"
end

SomeClass.config.foo
=> "wat"

But we can do even better. In the next step we will clean up the syntax of the configure block a little bit to make our module even more convenient to use.

Cleaning Up the Syntax

There is one last thing that is still bothering me with our current implementation—we have to repeat config on every single line in the configuration block. A proper DSL would know that everything within the configure block should be executed in the context of our configuration object and enable us to achieve the same thing with just this:

MyApp.configure do
  app_id "my_app"
  title "My App"
  cookie_name "my_app_session"
end

Let’s implement it, shall we? From the looks of it, we will need two things. First, we need a way to execute the block passed to configure in the context of the configuration object so that method calls within the block go to that object. Second, we have to change the accessor methods so that they write the value if an argument is provided to them and read it back when called without an argument. A possible implementation looks like this:

module Configurable
  def self.with(*attrs)
    not_provided = Object.new
  
    config_class = Class.new do
      attrs.each do |attr|
        define_method attr do |value = not_provided|
          if value === not_provided
            instance_variable_get("@#{attr}")
          else
            instance_variable_set("@#{attr}", value)
          end
        end
      end

      attr_writer *attrs
    end

    class_methods = Module.new do
      # ...

      def configure(&block)
        config.instance_eval(&block)
      end
    end

    # Create and return new module
    # ...
  end
end

The simpler change here is running the configure block in the context of the configuration object. Calling Ruby’s instance_eval method on an object lets you execute an arbitrary block of code as if it was running within that object, which means that when the configuration block calls the app_id method on the first line, that call will go to our configuration class instance.
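
As a quick, standalone illustration of instance_eval (the class and values here are made up):

class Greeting
  def initialize
    @text = "hello"
  end
end

greeting = Greeting.new

greeting.instance_eval do
  puts @text        # => hello -- the block runs as if it were code inside `greeting`
  @text = "goodbye" # we can even change the object's internal state
end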

The change to the attribute accessor methods in config_class is a bit more complicated. To understand it, we need to first understand what exactly attr_accessor was doing behind the scenes. Take the following attr_accessor call for example:

class SomeClass
  attr_accessor :foo, :bar
end

This is equivalent to defining a reader and writer method for each specified attribute:

class SomeClass
  def foo
    @foo
  end

  def foo=(value)
    @foo = value
  end

  # and the same with `bar`
end

So when we wrote attr_accessor *attrs in the original code, Ruby defined the attribute reader and writer methods for us for every attribute in attrs—that is, we got the following standard accessor methods: app_id, app_id=, title, title= and so on. In our new version, we want to keep the standard writer methods so that assignments like this still work properly:

MyApp.config.app_id = "not_my_app"
=> "not_my_app"

We can keep auto-generating the writer methods by calling attr_writer *attrs. However, we can no longer use the standard reader methods, as they also have to be capable of writing the attribute to support this new syntax:

MyApp.configure do
  app_id "my_app" # assigns a new value
  app_id          # reads the stored value
end

To generate the reader methods ourselves, we loop over the attrs array and define a method for each attribute that returns the current value of the matching instance variable if no new value is provided and writes the new value if it is specified:

not_provided = Object.new
# ...
attrs.each do |attr|
  define_method attr do |value = not_provided|
    if value === not_provided
      instance_variable_get("@#{attr}")
    else
      instance_variable_set("@#{attr}", value)
    end
  end
end

Here we use Ruby’s instance_variable_get method to read an instance variable with an arbitrary name, and instance_variable_set to assign a new value to it. Unfortunately the variable name must be prefixed with an “@” sign in both cases—hence the string interpolation.
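
For example (a throwaway class purely for illustration):

class Widget; end

widget = Widget.new
widget.instance_variable_set("@size", 42) # note the "@" prefix in the name
widget.instance_variable_get("@size")
# => 42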

You might be wondering why we have to use a blank object as the default value for “not provided” and why we can’t simply use nil for that purpose. The reason is simple—nil is a valid value that someone might want to set for a configuration attribute. If we tested for nil, we would not be able to tell these two scenarios apart:

MyApp.configure do
  app_id nil # expectation: assigns nil
  app_id     # expectation: returns current value
end

That blank object stored in not_provided is only ever going to be equal to itself, so this way we can be certain that nobody is going to pass it into our method and cause an unintended read instead of a write.
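
A minimal check illustrates why the sentinel works:

not_provided = Object.new

not_provided == nil          # => false -- `nil` stays a perfectly valid attribute value
not_provided == Object.new   # => false -- other blank objects are not equal to it
not_provided == not_provided # => true  -- it is only ever equal to itself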

Adding Support for References

There is one more feature that we could add to make our module even more versatile—the ability to reference a configuration attribute from another one:

MyApp.configure do
  app_id "my_app"
  title "My App"
  cookie_name { "#{app_id}_session" }
end

MyApp.config.cookie_name
=> "my_app_session"

Here we added a reference from cookie_name to the app_id attribute. Note that the expression containing the reference is passed in as a block—this is necessary in order to support the delayed evaluation of the attribute value. The idea is to only evaluate the block later when the attribute is read and not when it is defined—otherwise funny things would happen if we defined the attributes in the “wrong” order:

SomeClass.configure do
  foo "#{bar}_baz"     # expression evaluated here
  bar "hello"
end

SomeClass.config.foo
=> "_baz"              # not actually funny

If the expression is wrapped in a block, that will prevent it from being evaluated right away. Instead, we can save the block to be executed later when the attribute value is retrieved:

SomeClass.configure do
  foo { "#{bar}_baz" }  # stores block, does not evaluate it yet
  bar "hello"
end

SomeClass.config.foo    # `foo` evaluated here
=> "hello_baz"          # correct!

We do not have to make big changes to the Configurable module to add support for delayed evaluation using blocks. In fact, we only have to change the attribute method definition:

define_method attr do |value = not_provided, &block|
  if value === not_provided && block.nil?
    result = instance_variable_get("@#{attr}")
    result.is_a?(Proc) ? instance_eval(&result) : result
  else
    instance_variable_set("@#{attr}", block || value)
  end
end

When setting an attribute, the block || value expression saves the block if one was passed in, or otherwise it saves the value. Then, when the attribute is later read, we check if it is a block and evaluate it using instance_eval if it is, or if it is not a block, we return it like we did before.

Supporting references comes with its own caveats and edge cases, of course. For example, you can probably figure out what happens if you read any of the attributes in this configuration:

SomeClass.configure do
  foo { bar }
  bar { foo }
end

The Finished Module

In the end, we have got ourselves a pretty neat module for making an arbitrary class configurable and then specifying those configuration values using a clean and simple DSL that also lets us reference one configuration attribute from another:

class MyApp
  include Configurable.with(:app_id, :title, :cookie_name)

  # ...
end

MyApp.configure do
  app_id "my_app"
  title "My App"
  cookie_name { "#{app_id}_session" }
end

Here is the final version of the module that implements our DSL—a total of 36 lines of code:

module Configurable
  def self.with(*attrs)
    not_provided = Object.new

    config_class = Class.new do
      attrs.each do |attr|
        define_method attr do |value = not_provided, &block|
          if value === not_provided && block.nil?
            result = instance_variable_get("@#{attr}")
            result.is_a?(Proc) ? instance_eval(&result) : result
          else
            instance_variable_set("@#{attr}", block || value)
          end
        end
      end

      attr_writer *attrs
    end

    class_methods = Module.new do
      define_method :config do
        @config ||= config_class.new
      end

      def configure(&block)
        config.instance_eval(&block)
      end
    end

    Module.new do
      singleton_class.send :define_method, :included do |host_class|
        host_class.extend class_methods
      end
    end
  end
end

Looking at all this Ruby magic in a piece of code that is nearly unreadable and therefore very hard to maintain, you might wonder if all this effort was worth it just to make our domain specific language a little bit nicer. The short answer is that it depends—which brings us to the final topic of this article.

Ruby DSLs—When to Use and When Not to Use Them

You have probably noticed while reading the implementation steps of our DSL that, as we made the external facing syntax of the language cleaner and easier to use, we had to use an ever increasing number of metaprogramming tricks under the hood to make it happen. This resulted in an implementation that will be incredibly hard to understand and modify in the future. Like so many other things in software development, this is also a tradeoff that must be carefully examined.

For a domain specific language to be worth its implementation and maintenance cost, it must bring an even greater sum of benefits to the table. This is usually achieved by making the language reusable in as many different scenarios as possible, thereby amortizing the total cost between many different use cases. Frameworks and libraries are more likely to contain their own DSLs exactly because they are used by lots of developers, each of whom can enjoy the productivity benefits of those embedded languages.

So, as a general principle, only build DSLs if you, other developers, or the end users of your application will be getting a lot of use out of them. If you do create a DSL, make sure to include a comprehensive test suite with it, as well as properly document its syntax as it can be very hard to figure out from the implementation alone. Future you and your fellow developers will thank you for it.

Feel free to share on social networks. Find the buttons below this post. This opinion article is for informational purposes only.

Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

This article was written by Máté Solymosi.

Brought to you by Toptal

Edited by Temitope Adelekan

How to Avoid the Curse of Premature Optimization



It’s almost guarantee-worthy, really. From novices to experts, from architecture down to ASM, and optimizing anything from machine performance to developer performance, chances are quite good that you and your team are short-circuiting your own goals.

What? Me? My team?

That’s a pretty hefty accusation to level. Let me explain.

Optimization is not the holy grail, but it can be just as difficult to obtain. I want to share with you a few simple tips (and a mountain of pitfalls) to help transform your team’s experience from one of self-sabotage to one of harmony, fulfilment, balance, and, eventually, optimization.

What Is Premature Optimization?

Premature optimization is attempting to optimize performance:

  1. When first coding an algorithm
  2. Before benchmarks confirm you need to
  3. Before profiling pinpoints where it makes sense to bother optimizing
  4. At a lower level than your project currently dictates

Now, I’m an optimist, Optimus.

At least, I’m going to pretend to be an optimist while I write this article. For your part, you can pretend your name is Optimus, so this will speak more directly to you.

As someone in tech, you probably sometimes wonder just how it could possibly be $year and yet, despite all our advancement, it’s somehow an acceptable standard for $task to be so annoyingly time-consuming. You want to be lean. Efficient. Awesome. Someone like the Rockstar Programmers whom those job postings are clamouring for, but with leader chops. So when your team writes code, you encourage them to do it right the first time (even if “right” is a highly relative term, here). They know that’s the way of the Clever Coder, and also the way of the Those Who Don’t Need to Waste Time Refactoring Later.

I feel that. The force of perfectionism is sometimes strong within me, too. You want your team to spend a little time now to save a lot of time later, because everyone’s slogged through their share of “Shitty Code Other People Wrote (What the Hell Were They Thinking?).” That’s SCOPWWHWTT for short because I know you like unpronounceable acronyms.

I also know you don’t want your team’s code to be that of themselves or anyone else down the line.

So let’s see what can be done to guide your team in the right direction.

What to Optimize: Welcome to This Being an Art

First of all, when we think of program optimization, often we immediately assume we’re talking about performance. Even that is already vaguer than it may seem (speed? memory usage? etc.) so let’s stop right there.

Let’s make it even more ambiguous! Just at first.

My cobwebby brain likes to create order where possible so it will take every ounce of optimism for me to consider what I’m about to say to be a good thing.

There’s a simple rule of (performance) optimization out there that goes Don’t do it. This sounds quite easy to follow rigidly but not everybody agrees with it. I also don’t agree with it entirely. Some people will simply write better code out of the gate than others. Hopefully, for any given person, the quality of the code they would write in a brand new project will generally improve over time. But I know that, for many programmers, this will not be the case, because the more they know, the more ways they will be tempted to prematurely optimize.

For many programmers… the more they know, the more ways they will be tempted to prematurely optimize.

So this Don’t do it cannot be an exact science but is only meant to counteract the typical techie’s inner urge to solve the puzzle. This, after all, is what attracts many programmers to the craft in the first place. I get that. But ask them to save it, to resist the temptation. If one needs a puzzle-solving outlet right now, one can always dabble in the Sunday paper’s Sudoku, or pick up a Mensa book, or go code golfing with some artificial problem. But leave it out of the repo until the proper time. Almost always this is a wiser path than pre-optimization.

Remember, this practice is notorious enough that people ask whether premature optimization is the root of all evil. (I wouldn’t go quite that far, but I agree with the sentiment.)

I’m not saying we should pick the most braindead way we can think of at every level of design. Of course not. But instead of picking the most clever-looking, we can consider other values:

  1. The easiest to explain to your new hire
  2. The most likely to pass a code review by your most seasoned developer
  3. The most maintainable
  4. The quickest to write
  5. The easiest to test
  6. The most portable
  7. etc.

But here is where the problem shows itself to be difficult. It’s not just about avoiding optimizing for speed, code size, memory footprint, flexibility, or future-proofed-ness. It’s about balance and about whether what you’re doing is actually in line with your values and goals. It’s entirely contextual, and sometimes even impossible to measure objectively.

It’s an art. (C.f. The Art of Computer Programming.)

Why is this a good thing? Because life is like this. It’s messy. Our programming-oriented brains sometimes want to create order in the chaos so badly that we end up ironically multiplying the chaos. It’s like the paradox of trying to force someone to love you. If you think you’ve succeeded at that, it’s no longer love; meanwhile, you’re charged with hostage-taking, you probably need more love than ever, and this metaphor has got to be one of the most awkward I could have picked.

Anyway, if you think you’ve found the perfect system for something, well…enjoy the illusion while it lasts, I guess. It’s okay, failures are wonderful opportunities to learn.


Keep the UX in Mind

Let’s explore how user experience fits in among these potential priorities. After all, even wanting something to perform well is, at some level, about UX.

If you’re working on a UI, no matter what framework or language the code uses, there will be a certain amount of boilerplate and repetition. It can definitely be valuable in terms of programmer time and code clarity to try to reduce that. To help with the art of balancing priorities, I want to share a couple stories.

At one job, the company I worked for used a closed-source enterprise system that was based on an opinionated tech stack. In fact, it was so opinionated that the vendor who sold it to us refused to make UI customizations that didn’t fit the stack’s opinions because it was so painful for their developers. I’ve never used their stack, so I don’t condemn them for this, but the fact was that this “good for the programmer, bad for the user” trade-off was so cumbersome for my coworkers in certain contexts that I ended up writing a third-party add-on re-implementing this part of the system’s UI. (It was a huge productivity booster. My coworkers loved it! Over a decade later, it’s still saving everyone there time and frustration.)

I’m not saying that opinionation is a problem in itself; just that too much of it became a problem in our case. As a counter example, one of the big draws of Ruby on Rails is precisely that it is opinionated, in a front-end ecosystem where one easily gets vertigo from having too many options. (Give me something with opinions until I can figure out my own!)

In contrast, you may be tempted to crown UX the King of Everything in your project. A worthy goal, but let me tell my second story.

A few years after the success of the above project, one of my coworkers came to me to ask me to optimize the UX by automating a certain messy real-life scenario that sometimes arose so that it could be solved with a single click. I started analyzing whether it was even possible to design an algorithm that wouldn’t have any false positives or negatives because of the many and strange edge cases of the scenario. The more I talked with my coworker about it, the more I realized that the requirements were simply not going to pay off. The scenario only came up once in a while—monthly, let’s say—and currently took one person a few minutes to solve. Even if we could successfully automate it, without any bugs, it would take centuries for the required development and maintenance time to be paid off in terms of time saved by my co-workers. The people-pleaser in me had a difficult moment saying “no,” but I had to cut the conversation short.

So let the computer do what it can to help the user, but only to a sane extent. How do you know what extent that is, though?


An approach I like to take is to profile the UX the way your developers profile code. Find out from your users where they spend the most time clicking or typing the same thing over and over again, and see if you can optimize those interactions. Can your code make some educated guesses as to what they’re most likely going to input, and make that a no-input default? Aside from certain prohibited contexts (no-click EULA confirmation?), this can really make a difference to your users’ productivity and happiness. Do some hallway usability testing if you can. Sometimes, you may have trouble explaining what’s easy for computers to help with and what isn’t… but overall, this value is likely to be of pretty high importance to your users.

Avoiding Premature Optimization: When and How to Optimize

Our exploration of other contexts notwithstanding, let’s now explicitly assume that we’re optimizing some aspect of raw machine performance for the rest of this article. My suggested approach applies to other targets too, like flexibility, but each target will have its own gotchas; the main point is that prematurely optimizing for anything will probably fail.

So, in terms of performance, what optimization methods are there to actually follow? Let’s dig in.

This Isn’t a Grassroots Initiative, It’s Triple-Eh

The TL;DR is: Work down from the top. Higher-level optimizations can be made earlier in the project, and lower-level ones should be left for later. That’s all you need to get most of the meaning of the phrase “premature optimization”; doing things out of this order has a high probability of wasting your team’s time and being counterproductive. After all, you don’t write the entire project in machine code from the get-go, do you? So our AAA modus operandi is to optimize in this order:

  1. Architecture
  2. Algorithms
  3. Assembly

Common wisdom has it that algorithms and data structures are often the most effective places to optimize, at least where performance is concerned. Keep in mind, though, that architecture sometimes determines which algorithms and data structures can be used at all.

I once discovered a piece of software doing a financial report by querying an SQL database multiple times for every single financial transaction, then doing a very basic calculation on the client side. It took the small business using the software only a few months of use before even their relatively small amount of financial data meant that, with brand new desktops and a fairly beefy server, the report generation time was already up to several minutes, and this was one they needed to use fairly frequently. I ended up writing a straightforward SQL statement that contained the summing logic—thwarting their architecture by moving the work to the server to avoid all the duplication and network round-trips—and even several years’ worth of data later, my version could generate the same report in mere milliseconds on the same old hardware.

Sometimes you don’t have influence over the architecture of a project because it’s too late in the project for an architecture change to be feasible. Sometimes your developers can skirt around it like I did in the example above. But if you are at the start of a project and have some say in its architecture, now is the time to optimize that.


In a project, the architecture is the most expensive part to change after the fact, so this is a place where it can make sense to optimize at the beginning. If your app is to deliver data via ostriches, for example, you’ll want to structure it towards low-frequency, high-payload packets to avoid making a bad bottleneck even worse. In this case, you’d better have a full implementation of Tetris to entertain your users, because a loading spinner just isn’t going to cut it. (Kidding aside: Years ago I was installing my first Linux distribution, Corel Linux 2.0, and was delighted that the long-running installation process included just that. Having seen the Windows 95 installer’s infomercial screens so many times that I had memorized them, this was a breath of fresh air at the time.)

As an example of architectural change being expensive, the reason for the aforementioned SQL report being so highly unscalable in the first place is clear from its history. The app had evolved over time, from its roots in MS-DOS and a home-grown, custom database that wasn’t even originally multi-user. When the vendor finally made the switch to SQL, the schema and reporting code seem to have been ported one for one. This left them years’ worth of impressive 1,000%+ performance improvements to sprinkle throughout their updates, whenever they got around to completing the architecture switch by actually making use of SQL’s advantages for a given report. Good for business with locked-in clients like my then-employer, and clearly attempting to prioritize coding efficiency during the initial transition. But meeting clients’ needs, in some cases, about as effectively as a hammer turns a screw.

Architecture is partly about anticipating to what degree your project will need to be able to scale, and in what ways. Because architecture is so high-level, it’s difficult to get concrete with our “dos and don’ts” without narrowing our focus to specific technologies and domains.

I Wouldn’t Call It That, but Everyone Else Does

Thankfully, the Internet is rife with collected wisdom about most every kind of architecture ever dreamt up. When you know it’s time to optimize your architecture, researching pitfalls pretty much boils down to figuring out the buzzword that describes your brilliant vision. Chances are someone has thought along the same lines as you, tried it, failed, iterated, and published about it in a blog or a book.

Buzzword identification can be tricky to accomplish just by searching, because for what you call a FLDSMDFR, someone else already coined the term SCOPWWHWTT, and they describe the same problem you’re solving, but using a completely different vocabulary than you would. Developer communities to the rescue! Hit up StackExchange or HashNode with as thorough a description as you can, plus all the buzzwords that your architecture isn’t, so they know you did sufficient preliminary research. Someone will be glad to enlighten you.

Meanwhile, some general advice might be good food for thought.

Algorithms and Assembly

Given a conducive architecture, here is where the coders on your team will get the most T-bling for their time. The basic avoidance of premature optimization applies here too, but your programmers would do well to consider some of the specifics at this level. There’s so much to think about when it comes to implementation details that I wrote a separate article about code optimization geared toward front-line and senior coders.

But once you and your team have implemented something that is unoptimized performance-wise, do you really leave it at "Don’t do it"? Do you never optimize?

You’re right. The next rule is, for experts only, Don’t do it yet.

Time to Benchmark!

Your code works. Maybe it’s so dog-slow that you already know you will need to optimize, because it’s code that will run often. Maybe you aren’t sure, or you have an O(n) algorithm and figure it’s probably fine. No matter what the case, if this algorithm might ever be worth optimizing, my recommendation at this point is the same: Run a simple benchmark.

Why? Isn’t it clear that my O(n³) algorithm can’t possibly be worse than anything else? Well, for two reasons:

  1. You can add the benchmark to your test suite, as an objective measure of your performance goals, regardless of whether they are currently being met.
  2. Even experts can inadvertently make things slower. Even when it seems obvious. Really obvious.
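To make the first point concrete, here is a minimal sketch of a benchmark-as-test in TypeScript (run with Node); the 200 ms budget and the generateReport function are assumptions for illustration only, not part of any project described here:

// A deliberately simple benchmark: time the routine under test and fail loudly
// if it exceeds the agreed budget.
function generateReport(rows: number[]): number {
  return rows.reduce((sum, value) => sum + value, 0); // stand-in for the real work
}

const input = Array.from({ length: 1_000_000 }, (_, i) => i);
const start = process.hrtime.bigint();
generateReport(input);
const elapsedMs = Number(process.hrtime.bigint() - start) / 1_000_000;

if (elapsedMs > 200) {
  throw new Error(`Report generation took ${elapsedMs.toFixed(1)} ms; the budget is 200 ms`);
}
console.log(`Report generated in ${elapsedMs.toFixed(1)} ms`);

Dropped into your test suite, a check like this documents the performance goal and tells you immediately whether a later “optimization” actually helped.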

Don’t believe me on that second point?

How to Get Better Results from $1,400 Hardware Than from $7,000 Hardware

Jeff Atwood of StackOverflow fame once pointed out that it can sometimes (usually, in his opinion) be more cost-effective to just buy better hardware than to spend valuable programmer time on optimization. OK, so suppose you’ve reached a reasonably objective conclusion that your project would fit this scenario. Let’s further assume that what you’re trying to optimize is compilation time, because it’s a hefty Swift project you’re working on, and this has become a rather large developer bottleneck. Hardware shopping time!

What should you buy? Well, obviously, yen for yen, more expensive hardware tends to perform better than cheaper hardware. So obviously, a $7,000 Mac Pro should compile your software faster than some mid-range Mac Mini, right?

Wrong!

It turns out that sometimes more cores mean more efficient compilation… and in this particular case, LinkedIn found out the hard way that the opposite is true for their stack.

But I have seen management make one mistake further: They didn’t benchmark before and after at all, and simply found that a hardware upgrade didn’t make their software “feel” faster. There was no way to know for sure; worse, they still had no idea where the bottleneck was, so they remained unhappy about performance, having used up the time and money they were willing to allocate to the problem.

OK, I’ve Benchmarked Already. Can I Actually Optimize Yet??

Yes, assuming you’ve decided you need to. But maybe that decision will wait until more/all of the other algorithms are implemented too so you can see how the moving parts fit together and which are most important via profiling. This may be at the app level for a small app, or it may only apply to one subsystem. Either way, remember, a particular algorithm may seem important to the overall app, but even experts—especially experts—are prone to misdiagnosing this.

Think before You Disrupt

“I don’t know about you people, but…”


As some final food for thought, consider how you can apply the idea of false optimization to a much broader view: your project or company itself, or even a sector of the economy.

I know, it’s tempting to think that technology will save the day and that we can be the heroes who make it happen.

Plus, if we don’t do it, someone else will.

But remember that power corrupts, despite the best of intentions. I won’t link to any particular articles here, but if you haven’t wandered across any, it’s worth seeking some out about the wider impact of disrupting the economy, and who this sometimes ultimately serves. You may be surprised at some of the side-effects of trying to save the world through optimization.

Postscript

Did you notice something, Optimus? The only time I called you Optimus was in the beginning and now at the end. You weren’t called Optimus throughout the article. I’ll be honest, I forgot. I wrote the whole article without calling you Optimus. At the end when I realized I should go back and sprinkle your name throughout the text, a little voice inside me said, don’t do it.

This opinion article is for informational purposes only.

Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

 

 

 

This article was written by Kevin Bloch

Brought to you by Toptal

Edited by Temitope Adelekan

How Handwriting Can Make You Smarter and More Productive


Innovations in technology continue to make our lives easier every day.
But in some instances, what we gain in convenience with technology can be outweighed by what we lose in other areas. One such area is handwriting.
While typing on a laptop or mobile device might be convenient, research shows that we lose a lot by not taking notes by hand. Studies have shown writing notes out by hand improves critical thinking, comprehension, and focus.
GetVoIP has put together an infographic that explains these and other benefits and offers proven note-taking methods. Check out the graphic below to learn more.
[Infographic: The Benefits of Handwriting, by GetVoIP]


Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

 

 

 

 

 

Brought to you by GetVoIP

Edited by Temitope Adelekan

Ngrx and Angular 2 Tutorial: Building a Reactive Application


We talk a lot about reactive programming in the Angular realm. Reactive programming and Angular 2 seem to go hand in hand. However, for anyone not familiar with both technologies, it can be quite a daunting task to figure out what it is all about.

In this article, through building a reactive Angular 2 application using Ngrx, you will learn what the pattern is, where the pattern can prove to be useful, and how the pattern can be used to build better Angular 2 applications.


Ngrx is a group of Angular libraries for reactive extensions. Ngrx/Store implements the Redux pattern using the well-known RxJS observables of Angular 2. It provides several advantages by simplifying your application state to plain objects, enforcing unidirectional data flow, and more. The Ngrx/Effects library allows the application to communicate with the outside world by triggering side effects.

What Is Reactive Programming?

Reactive programming is a term that you hear a lot these days, but what does it really mean?

Reactive programming is a way of handling events and data flow in your applications. In reactive programming, you design your components and other pieces of your software to react to changes as they happen instead of asking for changes. This is a significant shift.

A great tool for reactive programming, as you might know, is RxJS.

By providing observables and a lot of operators to transform incoming data, this library will help you handle events in your application. With observables, you can treat an event as a stream of events rather than a one-time occurrence. This allows you to combine streams, for example, to create a new event to which you will listen.
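For instance, here is a small, self-contained sketch in the RxJS 5 style this article uses; the saves and deletes subjects are made up purely to show how two streams can be merged into a new one you subscribe to:

import * as Rx from 'rxjs/Rx';

// Two independent event streams (hypothetical examples)...
const saves = new Rx.Subject<string>();
const deletes = new Rx.Subject<string>();

// ...combined into a single "activity" stream that we react to.
Rx.Observable.merge(saves, deletes)
  .map(action => `activity: ${action}`)
  .subscribe(message => console.log(message));

saves.next('save draft');    // logs "activity: save draft"
deletes.next('delete item'); // logs "activity: delete item"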

Reactive programming is a shift in the way you communicate between different parts of an application. Instead of pushing data directly to the component or service that needs it, in reactive programming, it is the component or service that reacts to data changes.

A Word about Ngrx

In order to understand the application you will build in this tutorial, you must make a quick dive into the core Redux concepts.

Store

The store can be seen as your client-side database but, more importantly, it reflects the state of your application. You can see it as the single source of truth.

It is the only thing you alter when you follow the Redux pattern, and you modify it by dispatching actions to it.

Reducer

Reducers are the functions that know what to do with a given action and the previous state of your app.

The reducers will take the previous state from your store and apply a pure function to it. Pure means that the function always returns the same value for the same input and that it has no side effects. From the result of that pure function, you will have a new state that will be put in your store.
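As a tiny illustration of that purity requirement, here is a generic counter reducer (not part of the application you are about to build):

// A pure reducer: the same state and action always produce the same result,
// and nothing outside the function is touched.
function counterReducer(state: number = 0, action: { type: string }): number {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1; // return a new value instead of mutating anything
    default:
      return state;
  }
}

counterReducer(2, { type: 'INCREMENT' }); // always 3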

Actions

Actions are payloads that contain the information needed to alter your store. Basically, an action has a type and a payload that your reducer function will use to alter the state.
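A plain action is nothing more than an object literal. For example (a hypothetical ADD_FREELANCER action, shown only to illustrate the shape):

const addFreelancer = {
  type: 'ADD_FREELANCER',                              // what happened
  payload: { name: 'Ada', email: 'ada@example.com' },  // data the reducer needs
};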

Dispatcher

Dispatchers are simply an entry point for you to dispatch your action. In Ngrx, there is a dispatch method directly on the store.
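As a sketch, any class that has the store injected can dispatch an action like the one above; the FreelancerActions class and the ADD_FREELANCER type are assumptions for illustration only:

import { Store } from '@ngrx/store';

// Minimal sketch: dispatching a plain action object through the Ngrx store.
export class FreelancerActions {
  constructor(private store: Store<any>) {}

  addFreelancer(name: string, email: string) {
    this.store.dispatch({
      type: 'ADD_FREELANCER', // hypothetical action type
      payload: { name, email },
    });
  }
}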

Middleware

Middleware consists of functions that intercept each action being dispatched in order to create side effects. You will not use middleware in this article; it is implemented in the Ngrx/Effects library, and there is a good chance you will need it while building real-world applications.

Why Use Ngrx?

Complexity

The store and unidirectional data flow greatly reduce coupling between parts of your application. This reduced coupling reduces the complexity of your application since each part only cares about specific states.

Tooling


The entire state of your application is stored in one place, so it is easy to get a global view of your application state, which helps during development. Also, Redux comes with a lot of nice dev tools that take advantage of the store and can help you reproduce a certain state of the application or perform time travel, for example.

Architectural simplicity

Many of the benefits of Ngrx are achievable with other solutions; after all, Redux is an architectural pattern. But when you are building an application that is a great fit for the Redux pattern, such as a collaborative editing tool, you can easily add features by following the pattern.

Without much extra thought, adding things like analytics across your whole application becomes trivial, since you can track every action that is dispatched.

Small learning curve

Since this pattern is so widely adopted and simple, it is really easy for new people in your team to catch up quickly on what you did.

Ngrx shines the most when you have a lot of external actors that can modify your application, such as a monitoring dashboard. In those cases, it is hard to manage all the incoming data being pushed to your application, and state management becomes hard. That is why you want to simplify it with an immutable state, which is one thing the Ngrx store provides.

Building an Application with Ngrx

The power of Ngrx shines the most when you have outside data being pushed to your application in real time. With that in mind, let’s build a simple freelancer grid that shows online freelancers and allows you to filter through them.

Setting Up the Project

Angular CLI is an awesome tool that greatly simplifies the setup process. You may choose not to use it, but keep in mind that the rest of this article assumes you do. Install it with:

npm install -g @angular/cli

Next, you want to create a new application and install the Ngrx store library:

ng new toptal-freelancers
npm install @ngrx/store --save

Freelancers Reducer

Reducers are a core piece of the Redux architecture, so why not start with them first while building the application?

First, create a “freelancers” reducer that will be responsible for creating our new state each time an action is dispatched to the store.

freelancer-grid/freelancer-reducer.ts

import { Action } from '@ngrx/store';

export interface AppState {
    freelancers : Array<IFreelancer>
}

export interface IFreelancer {
    name: string,
    email: string,
    thumbnail: string
}

export const ACTIONS = {
    FREELANCERS_LOADED: 'FREELANCERS_LOADED',
}

export function freelancersReducer(
    state: Array<IFreelancer> = [],
    action: Action): Array<IFreelancer> {
    switch (action.type) {
        case ACTIONS.FREELANCERS_LOADED:
            // Return the new state with the payload as freelancers list
            return Array.prototype.concat(action.payload);
        default:
            return state;
    }
}

So here is our freelancers’ reducer.

This function will be called each time an action is dispatched through the store. If the action is FREELANCERS_LOADED, it will create a new array from the action payload. If it is not, it will return the old state reference and nothing will be appended.

It is important to note here that, if the old state reference is returned, the state will be considered unchanged. This means that if you call state.push(something), the state will not be considered to have changed. Keep that in mind while writing your reducer functions.

States are immutable. A new state must be returned each time it changes.
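One detail this tutorial does not show explicitly is wiring the reducer into the application module. A minimal sketch, assuming the Ngrx 2/3 API that was current when this was written (newer versions use StoreModule.forRoot instead of provideStore), might look like this:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { StoreModule } from '@ngrx/store';
import { freelancersReducer } from './freelancer-grid/freelancer-reducer';

// Sketch only: registers the reducer under the 'freelancers' key, which is
// what store.select('freelancers') reads later on. The rest of the module
// (declarations, bootstrap, and the filter reducer added later) is omitted.
@NgModule({
  imports: [
    BrowserModule,
    StoreModule.provideStore({ freelancers: freelancersReducer }),
  ],
})
export class AppModule {}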

Freelancer Grid Component

Create a grid component to show our online freelancers. At first, it will only reflect what is in the store.

ng generate component freelancer-grid

Put the following in freelancer-grid.component.ts:

import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import { AppState, IFreelancer, ACTIONS } from './freelancer-reducer';
import * as Rx from 'rxjs/Rx';

@Component({
  selector: 'app-freelancer-grid',
  templateUrl: './freelancer-grid.component.html',
  styleUrls: ['./freelancer-grid.component.scss'],
})
export class FreelancerGridComponent implements OnInit {
  public freelancers: Rx.Observable<Array<IFreelancer>>;

  constructor(private store: Store<AppState>) {
    this.freelancers = store.select('freelancers');
  }

  ngOnInit() {
  }
}

And the following in freelancer-grid.component.html:

<span class="count">Number of freelancers online: {{(freelancers | async).length}}</span>
<div class="freelancer fade thumbail" *ngFor="let freelancer of freelancers | async">
    <button type="button" class="close" aria-label="Close" (click)="delete(freelancer)"><span aria-hidden="true">&times;</span></button><br>
    <img class="img-circle center-block" src="{{freelancer.thumbnail}}" /><br>
    <div class="info"><span><strong>Name: </strong>{{freelancer.name}}</span>
        <span><strong>Email: </strong>{{freelancer.email}}</span></div>
    <a class="btn btn-default">Hire {{freelancer.name}}</a>
</div>

So what did you just do?

First, you have created a new component called freelancer-grid.

The component contains a property named freelancers that is a part of the application state contained in the Ngrx store. By using the select operator, you choose to only be notified by the freelancers property of the overall application state. So now each time the freelancers property of the application state changes, your observable will be notified.

One thing that is beautiful about this solution is that your component has only one dependency, the store, and that makes your component much less complex and easily reusable.

On the template side, you did nothing too complex. Notice the use of the async pipe in the *ngFor. The freelancers observable is not directly iterable, but thanks to Angular, we have the tools to unwrap it and bind the DOM to its value by using the async pipe. This makes working with observables so much easier.

Adding the Remove Freelancers Functionality

Now that you have a functional base, let’s add some actions to the application.

You want to be able to remove a freelancer from the state. According to how Redux works, you first need to define that action in each reducer that is affected by it.

In this case, it is only the freelancers reducer:

export const ACTIONS = {
    FREELANCERS_LOADED: 'FREELANCERS_LOADED',
    DELETE_FREELANCER: 'DELETE_FREELANCER',
}

export function freelancersReducer(
    state: Array<IFreelancer> = [],
    action: Action): Array<IFreelancer> {
    switch (action.type) {
        case ACTIONS.FREELANCERS_LOADED:
            // Return the new state with the payload as freelancers list
            return Array.prototype.concat(action.payload);
        case ACTIONS.DELETE_FREELANCER:
            // Remove the element from the array
            state.splice(state.indexOf(action.payload), 1);
            // We need to create another reference
            return Array.prototype.concat(state);
       default:
            return state;
    }
}

It is really important here to create a new array from the old one in order to have a new immutable state.
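As a side note, an equally valid sketch of producing that new array is Array.prototype.filter, which never touches the old state at all (IFreelancer is the interface defined earlier):

import { IFreelancer } from './freelancer-reducer';

// Alternative sketch: build the new state without ever mutating the old array.
export function removeFreelancer(state: Array<IFreelancer>, toDelete: IFreelancer): Array<IFreelancer> {
  return state.filter(freelancer => freelancer !== toDelete);
}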

Now, you can add a delete freelancers function to your component that will dispatch this action to the store:

delete(freelancer) {
    this.store.dispatch({
      type: ACTIONS.DELETE_FREELANCER,
      payload: freelancer,
    })
  }

Doesn’t that look simple?

You can now remove a specific freelancer from the state, and that change will propagate through your application.

Now, what if you add another component to the application to see how they can interact with each other through the store?

Filter Reducer

As always, let’s start with the reducer. For this component, it is quite simple. You want the reducer to always return a new state containing only the properties that were dispatched. It should look like this:

import { Action } from '@ngrx/store';

export interface IFilter {
    name: string,
    email: string,
}

export const ACTIONS = {
    UPDATE_FILTER: 'UPDATE_FILTER',
    CLEAR_FILTER: 'CLEAR_FILTER',
}

const initialState = { name: '', email: '' };

export function filterReducer(
    state: IFilter = initialState,
    action: Action): IFilter {
    switch (action.type) {
        case ACTIONS.UPDATE_FILTER:
            // Create a new state from payload
            return Object.assign({}, action.payload);
        case ACTIONS.CLEAR_FILTER:
            // Create a new state from initial state
            return Object.assign({}, initialState);
        default:
            return state;
    }
}

Filter Component

import { Component, OnInit } from '@angular/core';
import { IFilter, ACTIONS as FilterACTIONS } from './filter-reducer';
import { Store } from '@ngrx/store';
import { FormGroup, FormControl } from '@angular/forms';
import * as Rx from 'rxjs/Rx';

@Component({
  selector: 'app-filter',
  template: 
    '<form class="filter">'+
    '<label>Name</label>'+
    '<input type="text" [formControl]="name" name="name"/>'+
    '<label>Email</label>'+
    '<input type="text" [formControl]="email" name="email"/>'+
    '<a (click)="clearFilter()" class="btn btn-default">Clear Filter</a>'+
    '</form>',
  styleUrls: ['./filter.component.scss'],
})
export class FilterComponent implements OnInit {

  public name = new FormControl();
  public email = new FormControl();
  constructor(private store: Store<any>) {
    store.select('filter').subscribe((filter: IFilter) => {
      this.name.setValue(filter.name);
      this.email.setValue(filter.email);
    })
    Rx.Observable.merge(this.name.valueChanges, this.email.valueChanges).debounceTime(1000).subscribe(() => this.filter());
  }

  ngOnInit() {
  }

  filter() {
    this.store.dispatch({
      type: FilterACTIONS.UPDATE_FILTER,
      payload: {
        name: this.name.value,
        email: this.email.value,
      }
    });
  }

  clearFilter() {
    this.store.dispatch({
      type: FilterACTIONS.CLEAR_FILTER,
    })
  }

}

First, you have made a simple template that includes a form with two fields (name and email) that reflects our state.

You keep those fields in sync with the state quite a bit differently than what you did with the freelancers state. As you have seen, you subscribe to the filter state, and each time it emits, you assign the new value to the corresponding form control.

One thing that is nice with Angular 2 is that it provides you with a lot of tools to interact with observables.

You saw the async pipe earlier, and now you see the FormControl class, which gives you an observable of the value of an input. This allows fancy things like what you did in the filter component.

As you can see, you use Rx.Observable.merge to combine the two observables given by your FormControls, and then you debounce that new observable before triggering the filter function.

In simpler words, you wait one second after either the name or the email FormControl has changed and then call the filter function.

Isn’t that awesome?

All of that is done in a few lines of code. This is one of the reasons why you will love RxJS. It allows you to do a lot of those fancy things easily that would have been more complicated otherwise.

Now let’s step to that filter function. What does it do?

It simply dispatches the UPDATE_FILTER action with the value of the name and the email, and the reducer takes care of altering the state with that information.

Let’s move on to something more interesting.

How do you make that filter interact with your previously created freelancer grid?

Simple. You only have to listen to the filter part of the store. Let’s see what the code looks like.

import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import { AppState, IFreelancer, ACTIONS } from './freelancer-reducer';
import { IFilter, ACTIONS as FilterACTIONS } from './../filter/filter-reducer';
import * as Rx from 'rxjs/Rx';

@Component({
  selector: 'app-freelancer-grid',
  templateUrl: './freelancer-grid.component.html',
  styleUrls: ['./freelancer-grid.component.scss'],
})
export class FreelancerGridComponent implements OnInit {
  public freelancers: Rx.Observable<Array<IFreelancer>>;
  public filter: Rx.Observable<IFilter>;

  constructor(private store: Store<AppState>) {
    this.freelancers = Rx.Observable.combineLatest(store.select('freelancers'), store.select('filter'), this.applyFilter);
  }

  applyFilter(freelancers: Array<IFreelancer>, filter: IFilter): Array<IFreelancer> {
    return freelancers
      .filter(x => !filter.name || x.name.toLowerCase().indexOf(filter.name.toLowerCase()) !== -1)
      .filter(x => !filter.email || x.email.toLowerCase().indexOf(filter.email.toLowerCase()) !== -1)
  }

  ngOnInit() {
  }

  delete(freelancer) {
    this.store.dispatch({
      type: ACTIONS.DELETE_FREELANCER,
      payload: freelancer,
    })
  }

}

It is no more complicated than that.

Once again, you used the power of RxJS to combine the filter and freelancers state.

In fact, combineLatest will fire when either of the two observables fires and then combine the two states using the applyFilter function. It returns a new observable that does so. We don’t have to change any other lines of code.

Notice how the component does not care about how the filter is obtained, modified, or stored; it only listens to it as it would do for any other state. We just added the filter functionality and we did not add any new dependencies.

Making It Shine

Remember how the use of Ngrx really shines when we have to deal with real-time data? Let’s add that part to our application and see how it goes.

Introducing the freelancers-service.

ng generate service freelancer

The freelancer service will simulate real-time operations on data and should look like this:

import { Injectable } from '@angular/core';
import { Store } from '@ngrx/store';
import { AppState, IFreelancer, ACTIONS } from './freelancer-grid/freelancer-reducer';
import { Http, Response } from '@angular/http';

@Injectable()
export class RealtimeFreelancersService {

  private USER_API_URL = 'https://randomuser.me/api/?results='

  constructor(private store: Store<AppState>, private http: Http) { }

  private toFreelancer(value: any) {
    return {
      name: value.name.first + ' ' + value.name.last,
      email: value.email,
      thumbnail: value.picture.large,
    }
  }

  private random(y) {
    return Math.floor(Math.random() * y);
  }

  public run() {
    this.http.get(`${this.USER_API_URL}51`).subscribe((response) => {
      this.store.dispatch({
        type: ACTIONS.FREELANCERS_LOADED,
        payload: response.json().results.map(this.toFreelancer)
      })
    })

    setInterval(() => {
      this.store.select('freelancers').first().subscribe((freelancers: Array<IFreelancer>) => {
        let getDeletedIndex = () => {
          return this.random(freelancers.length - 1)
        }
        this.http.get(`${this.USER_API_URL}${this.random(10)}`).subscribe((response) => {
          this.store.dispatch({
            type: ACTIONS.INCOMING_DATA,
            payload: {
              ADD: response.json().results.map(this.toFreelancer),
              DELETE: new Array(this.random(6)).fill(0).map(() => getDeletedIndex()),
            }
          });
          this.addFadeClassToNewElements();
        });
      });
    }, 10000);
  }

  private addFadeClassToNewElements() {
    let elements = window.document.getElementsByClassName('freelancer');
    for (let i = 0; i < elements.length; i++) {
      if (elements.item(i).className.indexOf('fade') === -1) {
        elements.item(i).classList.add('fade');
      }
    }
  }
}

This service is not perfect, but it does the job and, for demo purposes, it allows us to demonstrate a few things.

First, this service is quite simple. It queries a user API and pushes the results to the store. It is a no-brainer; you don’t have to think about where the data goes. It goes to the store, which is something that makes Redux so useful and dangerous at the same time (we will come back to this later). Then, every ten seconds, the service picks a few freelancers and dispatches an operation to delete them, along with an operation to add a few new ones.

If we want our reducer to be able to handle it, we need to modify it:

import { Action } from '@ngrx/store';

export interface AppState {
    freelancers : Array<IFreelancer>
}

export interface IFreelancer {
    name: string,
    email: string,
    thumbnail: string
}

export const ACTIONS = {
    FREELANCERS_LOADED: 'FREELANCERS_LOADED',
    INCOMING_DATA: 'INCOMING_DATA',
    DELETE_FREELANCER: 'DELETE_FREELANCER',
}

export function freelancersReducer(
    state: Array<IFreelancer> = [],
    action: Action): Array<IFreelancer> {
    switch (action.type) {
        case ACTIONS.INCOMING_DATA:
            // Remove the freelancers at the indexes listed in the payload
            action.payload.DELETE.forEach((index) => {
                state.splice(index, 1);
            })
            return Array.prototype.concat(action.payload.ADD, state);
        case ACTIONS.FREELANCERS_LOADED:
            // Return the new state with the payload as freelancers list
            return Array.prototype.concat(action.payload);
        case ACTIONS.DELETE_FREELANCER:
            // Remove the element from the array
            state.splice(state.indexOf(action.payload), 1);
            // We need to create another reference
            return Array.prototype.concat(state);
        default:
            return state;
    }
}

Now we are able to handle such operations.

One thing this service demonstrates is that the whole process of applying state changes is done synchronously, and it is quite important to notice that. If applying the state were asynchronous, the call to this.addFadeClassToNewElements(); would not work, because the DOM elements would not yet exist when this function is called.

Personally, I find that quite useful, since it improves predictability.

Building Applications, the Reactive Way

Through this tutorial, you have built a reactive application using Ngrx, RxJS, and Angular 2.

As you have seen, these are powerful tools. What you have built here can also be seen as an implementation of the Redux architecture, and Redux is powerful in itself. However, it also has some constraints, and when we use Ngrx, those constraints inevitably show up in the parts of our application that use it.

[Diagram: overview of the application architecture]

The diagram above is a rough overview of the architecture you just built.

You may notice that even though some components influence each other, they are independent of one another. This is a peculiarity of this architecture: Components share a common dependency, which is the store.

Another particular thing about this architecture is that we don’t call functions; we dispatch actions. An alternative to Ngrx could be to make a service that manages a particular piece of state with observables and to call functions on that service instead of dispatching actions. This way, you still get centralization and reactiveness of the state while isolating the problematic state, and you avoid the overhead of creating reducers and describing actions as plain objects.
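A minimal sketch of that alternative, assuming a hypothetical FilterStateService built on an RxJS BehaviorSubject, could look like this:

import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs/BehaviorSubject';

export interface IFilter {
  name: string;
  email: string;
}

// Hypothetical alternative to a reducer: a service that owns one slice of state,
// exposes it as an observable, and is driven by plain method calls instead of actions.
@Injectable()
export class FilterStateService {
  private state = new BehaviorSubject<IFilter>({ name: '', email: '' });

  // Components subscribe here much as they would to store.select('filter').
  readonly filter$ = this.state.asObservable();

  update(filter: IFilter) {
    this.state.next({ ...filter }); // still emit a fresh, immutable object
  }

  clear() {
    this.state.next({ name: '', email: '' });
  }
}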

When you feel like the state of your application is being updated from different sources and it starts to become a mess, Ngrx is what you need.

This opinion article is for informational purposes only.

Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

 

 

 

This article was written by Simon Boissonneault-Robert

Brought to you by Toptal

Edited by Temitope Adelekan

Get Started With Microservices: A Dropwizard Tutorial


We’re all witnessing a rise in the popularity of microservice architectures. In a microservice architecture, Dropwizard commands a very important place. It is a framework for building RESTful web services or, more precisely, a set of tools and frameworks for building RESTful web services.

It allows developers to bootstrap projects quickly. This helps you package your applications so they are easily deployable in a production environment as standalone services. If you have ever been in a situation where you need to bootstrap a project in the Spring framework, for example, you probably know how painful that can be.


In this blog, I will guide you through the complete process of writing a simple Dropwizard RESTful service. After we’re done, we will have a service for basic CRUD operations on “parts.” It doesn’t really matter what “part” is; it can be anything. It just came to mind first.

We will store the data in a MySQL database, using JDBI to query it, and will expose the following endpoints:

  • GET /parts - to retrieve all parts from the DB
  • GET /parts/{id} - to get a particular part from the DB
  • POST /parts - to create a new part
  • PUT /parts/{id} - to edit an existing part
  • DELETE /parts/{id} - to delete a part from the DB

We will use OAuth to authenticate our service, and finally, add some unit tests to it.

Default Dropwizard Libraries

Instead of including all libraries needed to build a REST service separately and configuring each of them, Dropwizard does that for us. Here is the list of libraries that come with Dropwizard by default:

  • Jetty: You need HTTP to run a web application. Dropwizard embeds the Jetty servlet container for running web applications. Instead of deploying your applications to an application server or web server, Dropwizard defines a main method that invokes the Jetty server as a standalone process. As of now, Dropwizard recommends only running the application with Jetty; other servlet containers such as Tomcat are not officially supported.
  • Jersey: Jersey is one of the best REST API implementations on the market. Also, it follows the standard JAX-RS specification, and it’s the reference implementation for the JAX-RS specification. Dropwizard uses Jersey as the default framework for building RESTful web applications.
  • Jackson: Jackson is the de facto standard for JSON format handling. It is one of the best object mapper APIs for the JSON format.
  • Metrics: Dropwizard has its own metrics module for exposing the application metrics through HTTP endpoints.
  • Guava: In addition to highly optimized immutable data structures, Guava provides a growing number of classes to speed up development in Java.
  • Logback and Slf4j: These two are used for better logging mechanisms.
  • Freemarker and Mustache: Choosing a template engine for your application is one of the key decisions. The chosen template engine has to be flexible enough to write better scripts. Dropwizard uses the well-known and popular template engines Freemarker and Mustache for building user interfaces.

Apart from the above list, there are many other libraries like Joda Time, Liquibase, Apache HTTP Client, and Hibernate Validator used by Dropwizard for building REST services.

Maven Configuration

Dropwizard officially supports Maven. Even though you can use other build tools, most of the guides and documentation use Maven, so we’re going to use it here too. If you’re not familiar with Maven, you can check out this Maven tutorial.

This is the first step in creating your Dropwizard application. Please add the following entry in your Maven’s pom.xml file:

<dependencies>
  <dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>${dropwizard.version}</version>
  </dependency>
</dependencies>

Before adding the above entry, you could add the dropwizard.version as below:

<properties>
  <dropwizard.version>1.1.0</dropwizard.version>
</properties>

That’s it. You’re done writing the Maven configuration. This will download all the required dependencies to your project. The current Dropwizard version is 1.1.0, so we will be using it in this guide.

Now, we can move on to writing our first real Dropwizard application.

Define Configuration Class

Dropwizard stores configurations in YAML files. You will need the file configuration.yml in your application root folder. This file will then be deserialized to an instance of your application’s configuration class and validated. Your application’s configuration class is a subclass of Dropwizard’s Configuration class (io.dropwizard.Configuration).

Let’s create a simple configuration class:

import javax.validation.Valid;
import javax.validation.constraints.NotNull;

import com.fasterxml.jackson.annotation.JsonProperty;

import io.dropwizard.Configuration;
import io.dropwizard.db.DataSourceFactory;

public class DropwizardBlogConfiguration extends Configuration {
  private static final String DATABASE = "database";

  @Valid
  @NotNull
  private DataSourceFactory dataSourceFactory = new DataSourceFactory();

  @JsonProperty(DATABASE)
  public DataSourceFactory getDataSourceFactory() {
    return dataSourceFactory;
  }

  @JsonProperty(DATABASE)
  public void setDataSourceFactory(final DataSourceFactory dataSourceFactory) {
    this.dataSourceFactory = dataSourceFactory;
  }
}

The YAML configuration file would look like this:

database:
  driverClass: com.mysql.cj.jdbc.Driver
  url: jdbc:mysql://localhost/dropwizard_blog
  user: dropwizard_blog
  password: dropwizard_blog 
  maxWaitForConnection: 1s
  validationQuery: "SELECT 1"
  validationQueryTimeout: 3s
  minSize: 8
  maxSize: 32
  checkConnectionWhileIdle: false
  evictionInterval: 10s
  minIdleTime: 1 minute
  checkConnectionOnBorrow: true

The above class will be deserialized from the YAML file, and the values from the YAML file will be placed into this object.

Define an Application Class

We should now create the main application class. This class brings all the bundles together, brings the application up, and gets it running for use.

Here is an example of an application class in Dropwizard:

import io.dropwizard.Application;
import io.dropwizard.auth.AuthDynamicFeature;
import io.dropwizard.auth.oauth.OAuthCredentialAuthFilter;
import io.dropwizard.setup.Environment;

import javax.sql.DataSource;

import org.glassfish.jersey.server.filter.RolesAllowedDynamicFeature;
import org.skife.jdbi.v2.DBI;

import com.toptal.blog.auth.DropwizardBlogAuthenticator;
import com.toptal.blog.auth.DropwizardBlogAuthorizer;
import com.toptal.blog.auth.User;
import com.toptal.blog.config.DropwizardBlogConfiguration;
import com.toptal.blog.health.DropwizardBlogApplicationHealthCheck;
import com.toptal.blog.resource.PartsResource;
import com.toptal.blog.service.PartsService;

public class DropwizardBlogApplication extends Application<DropwizardBlogConfiguration> {
  private static final String SQL = "sql";
  private static final String DROPWIZARD_BLOG_SERVICE = "Dropwizard blog service";
  private static final String BEARER = "Bearer";

  public static void main(String[] args) throws Exception {
    new DropwizardBlogApplication().run(args);
  }

  @Override
  public void run(DropwizardBlogConfiguration configuration, Environment environment) {
    // Datasource configuration
    final DataSource dataSource =
        configuration.getDataSourceFactory().build(environment.metrics(), SQL);
    DBI dbi = new DBI(dataSource);

    // Register Health Check
    DropwizardBlogApplicationHealthCheck healthCheck =
        new DropwizardBlogApplicationHealthCheck(dbi.onDemand(PartsService.class));
    environment.healthChecks().register(DROPWIZARD_BLOG_SERVICE, healthCheck);

    // Register OAuth authentication
    environment.jersey()
        .register(new AuthDynamicFeature(new OAuthCredentialAuthFilter.Builder<User>()
            .setAuthenticator(new DropwizardBlogAuthenticator())
            .setAuthorizer(new DropwizardBlogAuthorizer()).setPrefix(BEARER).buildAuthFilter()));
    environment.jersey().register(RolesAllowedDynamicFeature.class);

    // Register resources
    environment.jersey().register(new PartsResource(dbi.onDemand(PartsService.class)));
  }
}

What we’re actually doing above is overriding the Dropwizard run method. In this method, we’re instantiating a DB connection, registering our custom health check (we’ll talk about it later), initializing OAuth authentication for our service, and finally, registering a Dropwizard resource.

All of these will be explained later on.

Define a Representation Class

Now we have to start thinking about our REST API and what the representation of our resource will be. We have to design the JSON format and the corresponding representation class that converts to the desired JSON format.

Let’s look at the sample JSON format for this simple representation class example:

{
  "code": 200,
  "data": {
    "id": 1,
    "name": "Part 1",
    "code": "PART_1_CODE"
  }
}

For the above JSON format, we would create the representation class as below:

import org.hibernate.validator.constraints.Length;

import com.fasterxml.jackson.annotation.JsonProperty;

public class Representation<T> {
  private long code;

  @Length(max = 3)
  private T data;

  public Representation() {
    // Jackson deserialization
  }

  public Representation(long code, T data) {
    this.code = code;
    this.data = data;
  }

  @JsonProperty
  public long getCode() {
    return code;
  }

  @JsonProperty
  public T getData() {
    return data;
  }
}

This is a fairly simple POJO.

Defining a Resource Class

A resource is what REST services are all about. It is nothing but an endpoint URI for accessing the resource on the server. In this example, we’ll have a resource class with a few annotations for request URI mapping. Since Dropwizard uses the JAX-RS implementation, we will define the URI path using the @Path annotation.

Here is a resource class for our Dropwizard example:

import java.util.List;

import javax.annotation.security.RolesAllowed;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.jetty.http.HttpStatus;

import com.codahale.metrics.annotation.Timed;
import com.toptal.blog.model.Part;
import com.toptal.blog.representation.Representation;
import com.toptal.blog.service.PartsService;

@Path("/parts")
@Produces(MediaType.APPLICATION_JSON)
@RolesAllowed("ADMIN")
public class PartsResource {
  private final PartsService partsService;

  public PartsResource(PartsService partsService) {
    this.partsService = partsService;
  }

  @GET
  @Timed
  public Representation<List<Part>> getParts() {
    return new Representation<List<Part>>(HttpStatus.OK_200, partsService.getParts());
  }

  @GET
  @Timed
  @Path("{id}")
  public Representation<Part> getPart(@PathParam("id") final int id) {
    return new Representation<Part>(HttpStatus.OK_200, partsService.getPart(id));
  }

  @POST
  @Timed
  public Representation<Part> createPart(@NotNull @Valid final Part part) {
    return new Representation<Part>(HttpStatus.OK_200, partsService.createPart(part));
  }

  @PUT
  @Timed
  @Path("{id}")
  public Representation<Part> editPart(@NotNull @Valid final Part part,
      @PathParam("id") final int id) {
    part.setId(id);
    return new Representation<Part>(HttpStatus.OK_200, partsService.editPart(part));
  }

  @DELETE
  @Timed
  @Path("{id}")
  public Representation<String> deletePart(@PathParam("id") final int id) {
    return new Representation<String>(HttpStatus.OK_200, partsService.deletePart(id));
  }
}

You can see all of the endpoints are actually defined in this class.

Registering a Resource

Let’s now go back to the main application class. You can see at the end of that class that we have registered our resource to be initialized when the service runs. We need to do so with all resources we might have in our application. This is the code snippet responsible for that:

// Register resources
    environment.jersey().register(new PartsResource(dbi.onDemand(PartsService.class)));

Service Layer

For proper exception handling and the ability to be independent of the data storage engine, we will introduce a “mid-layer” service class. This is the class we will be calling from our resource layer, without caring what lies underneath. That’s why we have this layer between the resource and DAO layers. Here is our service class:

import java.util.List;
import java.util.Objects;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response.Status;

import org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException;
import org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException;
import org.skife.jdbi.v2.sqlobject.CreateSqlObject;

import com.toptal.blog.dao.PartsDao;
import com.toptal.blog.model.Part;

public abstract class PartsService {
  private static final String PART_NOT_FOUND = "Part id %s not found.";
  private static final String DATABASE_REACH_ERROR =
      "Could not reach the MySQL database. The database may be down or there may be network connectivity issues. Details: ";
  private static final String DATABASE_CONNECTION_ERROR =
      "Could not create a connection to the MySQL database. The database configurations are likely incorrect. Details: ";
  private static final String DATABASE_UNEXPECTED_ERROR =
      "Unexpected error occurred while attempting to reach the database. Details: ";
  private static final String SUCCESS = "Success...";
  private static final String UNEXPECTED_ERROR = "An unexpected error occurred while deleting part.";

  @CreateSqlObject
  abstract PartsDao partsDao();

  public List<Part> getParts() {
    return partsDao().getParts();
  }

  public Part getPart(int id) {
    Part part = partsDao().getPart(id);
    if (Objects.isNull(part)) {
      throw new WebApplicationException(String.format(PART_NOT_FOUND, id), Status.NOT_FOUND);
    }
    return part;
  }

  public Part createPart(Part part) {
    partsDao().createPart(part);
    return partsDao().getPart(partsDao().lastInsertId());
  }

  public Part editPart(Part part) {
    if (Objects.isNull(partsDao().getPart(part.getId()))) {
      throw new WebApplicationException(String.format(PART_NOT_FOUND, part.getId()),
          Status.NOT_FOUND);
    }
    partsDao().editPart(part);
    return partsDao().getPart(part.getId());
  }

  public String deletePart(final int id) {
    int result = partsDao().deletePart(id);
    switch (result) {
      case 1:
        return SUCCESS;
      case 0:
        throw new WebApplicationException(String.format(PART_NOT_FOUND, id), Status.NOT_FOUND);
      default:
        throw new WebApplicationException(UNEXPECTED_ERROR, Status.INTERNAL_SERVER_ERROR);
    }
  }

  public String performHealthCheck() {
    try {
      partsDao().getParts();
    } catch (UnableToObtainConnectionException ex) {
      return checkUnableToObtainConnectionException(ex);
    } catch (UnableToExecuteStatementException ex) {
      return checkUnableToExecuteStatementException(ex);
    } catch (Exception ex) {
      return DATABASE_UNEXPECTED_ERROR + ex.getCause().getLocalizedMessage();
    }
    return null;
  }

  private String checkUnableToObtainConnectionException(UnableToObtainConnectionException ex) {
    if (ex.getCause() instanceof java.sql.SQLNonTransientConnectionException) {
      return DATABASE_REACH_ERROR + ex.getCause().getLocalizedMessage();
    } else if (ex.getCause() instanceof java.sql.SQLException) {
      return DATABASE_CONNECTION_ERROR + ex.getCause().getLocalizedMessage();
    } else {
      return DATABASE_UNEXPECTED_ERROR + ex.getCause().getLocalizedMessage();
    }
  }

  private String checkUnableToExecuteStatementException(UnableToExecuteStatementException ex) {
    if (ex.getCause() instanceof java.sql.SQLSyntaxErrorException) {
      return DATABASE_CONNECTION_ERROR + ex.getCause().getLocalizedMessage();
    } else {
      return DATABASE_UNEXPECTED_ERROR + ex.getCause().getLocalizedMessage();
    }
  }
}

The last part of it is actually a health check implementation, which we will be talking about later.

DAO layer, JDBI, and Mapper

Dropwizard supports JDBI and Hibernate. It’s a separate Maven module, so let’s first add it as a dependency, along with the MySQL connector:

<dependency>
  <groupId>io.dropwizard</groupId>
  <artifactId>dropwizard-jdbi</artifactId>
  <version>${dropwizard.version}</version>
</dependency>
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>${mysql.connector.version}</version>
</dependency>

For a simple CRUD service, I personally prefer JDBI, as it is simpler and much faster to implement. I have created a simple MySQL schema with one table only to be used in our example. You can find the init script for the schema within the source. JDBI offers simple query writing by using annotations such as @SqlQuery for reading and @SqlUpdate for writing data. Here is our DAO interface:

import java.util.List;

import org.skife.jdbi.v2.sqlobject.Bind;
import org.skife.jdbi.v2.sqlobject.BindBean;
import org.skife.jdbi.v2.sqlobject.SqlQuery;
import org.skife.jdbi.v2.sqlobject.SqlUpdate;
import org.skife.jdbi.v2.sqlobject.customizers.RegisterMapper;

import com.toptal.blog.mapper.PartsMapper;
import com.toptal.blog.model.Part;

@RegisterMapper(PartsMapper.class)
public interface PartsDao {

  @SqlQuery("select * from parts;")
  public List<Part> getParts();

  @SqlQuery("select * from parts where id = :id")
  public Part getPart(@Bind("id") final int id);

  @SqlUpdate("insert into parts(name, code) values(:name, :code)")
  void createPart(@BindBean final Part part);

  @SqlUpdate("update parts set name = coalesce(:name, name), code = coalesce(:code, code) where id = :id")
  void editPart(@BindBean final Part part);

  @SqlUpdate("delete from parts where id = :id")
  int deletePart(@Bind("id") final int id);

  @SqlQuery("select last_insert_id();")
  public int lastInsertId();
}

As you can see, it’s fairly simple. However, we need to map our SQL result sets to a model, which we do by registering a mapper class. Here is our mapper class:

import java.sql.ResultSet;
import java.sql.SQLException;

import org.skife.jdbi.v2.StatementContext;
import org.skife.jdbi.v2.tweak.ResultSetMapper;

import com.toptal.blog.model.Part;

public class PartsMapper implements ResultSetMapper<Part> {
  private static final String ID = "id";
  private static final String NAME = "name";
  private static final String CODE = "code";

  public Part map(int i, ResultSet resultSet, StatementContext statementContext)
      throws SQLException {
    return new Part(resultSet.getInt(ID), resultSet.getString(NAME), resultSet.getString(CODE));
  }
}

And our model:

import org.hibernate.validator.constraints.NotEmpty;

public class Part {
  private int id;
  @NotEmpty
  private String name;
  @NotEmpty
  private String code;

  public int getId() {
    return id;
  }

  public void setId(int id) {
    this.id = id;
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public String getCode() {
    return code;
  }

  public void setCode(String code) {
    this.code = code;
  }

  public Part() {
    super();
  }

  public Part(int id, String name, String code) {
    super();
    this.id = id;
    this.name = name;
    this.code = code;
  }
}

Dropwizard Health Check

Dropwizard offers native support for health checking. In our case, we would probably like to check whether the database is up and running before declaring that our service is healthy. What we do is perform some simple DB action, like getting parts from the DB, and handle the potential outcomes (success or exceptions).

Here is our health check implementation in Dropwizard:

import com.codahale.metrics.health.HealthCheck;
import com.toptal.blog.service.PartsService;

public class DropwizardBlogApplicationHealthCheck extends HealthCheck {
  private static final String HEALTHY = "The Dropwizard blog Service is healthy for read and write";
  private static final String UNHEALTHY = "The Dropwizard blog Service is not healthy. ";
  private static final String MESSAGE_PLACEHOLDER = "{}";

  private final PartsService partsService;

  public DropwizardBlogApplicationHealthCheck(PartsService partsService) {
    this.partsService = partsService;
  }

  @Override
  public Result check() throws Exception {
    String mySqlHealthStatus = partsService.performHealthCheck();

    if (mySqlHealthStatus == null) {
      return Result.healthy(HEALTHY);
    } else {
      return Result.unhealthy(UNHEALTHY + MESSAGE_PLACEHOLDER, mySqlHealthStatus);
    }
  }
}

Adding Authentication

Dropwizard supports basic authentication and OAuth. Here, I will show you how to protect your service with OAuth. However, due to complexity, I have omitted the underlying DB structure and just show how it is wrapped. Implementing it at full scale should not be an issue starting from here. Dropwizard has two important interfaces we need to implement.

The first one is Authenticator. Our class should implement the authenticate method, which should check whether the given access token is valid. So I would call this the first gate to the application. If authentication succeeds, it should return a principal. This principal is our actual user with its role. The role is important for the other Dropwizard interface we need to implement, Authorizer, which is responsible for checking whether the user has sufficient permissions to access a certain resource. So, if you go back and check our resource class, you will see that it requires the admin role to access its endpoints. These annotations can also be applied per method. Dropwizard authorization support is a separate Maven module, so we need to add it to our dependencies:

<dependency>
  <groupId>io.dropwizard</groupId>
  <artifactId>dropwizard-auth</artifactId>
  <version>${dropwizard.version}</version>
</dependency>

Here are the classes from our example. They don’t actually do anything smart, but they’re a skeleton for a full-scale OAuth authorization:

import java.util.Optional;

import io.dropwizard.auth.AuthenticationException;
import io.dropwizard.auth.Authenticator;

public class DropwizardBlogAuthenticator implements Authenticator<String, User> {
  @Override
  public Optional<User> authenticate(String token) throws AuthenticationException {
    if ("test_token".equals(token)) {
      return Optional.of(new User());
    }
    return Optional.empty();
  }
}
import java.util.Objects;

import io.dropwizard.auth.Authorizer;

public class DropwizardBlogAuthorizer implements Authorizer<User> {
  @Override
  public boolean authorize(User principal, String role) {
    // Allow any logged in user.
    if (Objects.nonNull(principal)) {
      return true;
    }
    return false;
  }
}
import java.security.Principal;

public class User implements Principal {
  private int id;
  private String username;
  private String password;

  public int getId() {
    return id;
  }

  public void setId(int id) {
    this.id = id;
  }

  public String getUsername() {
    return username;
  }

  public void setUsername(String username) {
    this.username = username;
  }

  public String getPassword() {
    return password;
  }

  public void setPassword(String password) {
    this.password = password;
  }

  @Override
  public String getName() {
    return username;
  }
}

Unit Tests in Dropwizard

Let’s add some unit tests to our application. I will stick to testing the Dropwizard-specific parts of the code, in our case the Representation and Resource classes. We will need to add the following dependencies to our Maven file:

<dependency>
  <groupId>io.dropwizard</groupId>
  <artifactId>dropwizard-testing</artifactId>
  <version>${dropwizard.version}</version>
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-core</artifactId>
  <version>${mockito.version}</version>
  <scope>test</scope>
</dependency>

To test the representation, we will also need a sample JSON file to test against, so let’s create fixtures/part.json under src/test/resources:

{
  "id": 1,
  "name": "testPartName",
  "code": "testPartCode"
}

And here is the JUnit test class:

import static io.dropwizard.testing.FixtureHelpers.fixture;
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.toptal.blog.model.Part;

import io.dropwizard.jackson.Jackson;

public class RepresentationTest {
  private static final ObjectMapper MAPPER = Jackson.newObjectMapper();
  private static final String PART_JSON = "fixtures/part.json";
  private static final String TEST_PART_NAME = "testPartName";
  private static final String TEST_PART_CODE = "testPartCode";

  @Test
  public void serializesToJSON() throws Exception {
    final Part part = new Part(1, TEST_PART_NAME, TEST_PART_CODE);

    final String expected =
        MAPPER.writeValueAsString(MAPPER.readValue(fixture(PART_JSON), Part.class));

    assertThat(MAPPER.writeValueAsString(part)).isEqualTo(expected);
  }

  @Test
  public void deserializesFromJSON() throws Exception {
    final Part part = new Part(1, TEST_PART_NAME, TEST_PART_CODE);

    assertThat(MAPPER.readValue(fixture(PART_JSON), Part.class).getId()).isEqualTo(part.getId());
    assertThat(MAPPER.readValue(fixture(PART_JSON), Part.class).getName())
        .isEqualTo(part.getName());
    assertThat(MAPPER.readValue(fixture(PART_JSON), Part.class).getCode())
        .isEqualTo(part.getCode());
  }
}

When it comes to testing resources, the main point is that you actually behave as an HTTP client, sending HTTP requests against the resources, rather than calling methods directly as you normally would. Here is the example for our PartsResource class:

import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.Assert.assertNotNull;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.ArrayList;
import java.util.List;

import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;

import org.junit.After;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;

import io.dropwizard.testing.junit.ResourceTestRule;
// Plus imports for Part, PartsService, PartsResource, and Representation from the project packages.

public class PartsResourceTest {
  private static final String SUCCESS = "Success...";
  private static final String TEST_PART_NAME = "testPartName";
  private static final String TEST_PART_CODE = "testPartCode";
  private static final String PARTS_ENDPOINT = "/parts";

  private static final PartsService partsService = mock(PartsService.class);

  @ClassRule
  public static final ResourceTestRule resources =
      ResourceTestRule.builder().addResource(new PartsResource(partsService)).build();

  private final Part part = new Part(1, TEST_PART_NAME, TEST_PART_CODE);

  @Before
  public void setup() {
    when(partsService.getPart(eq(1))).thenReturn(part);
    List<Part> parts = new ArrayList<>();
    parts.add(part);
    when(partsService.getParts()).thenReturn(parts);
    when(partsService.createPart(any(Part.class))).thenReturn(part);
    when(partsService.editPart(any(Part.class))).thenReturn(part);
    when(partsService.deletePart(eq(1))).thenReturn(SUCCESS);
  }

  @After
  public void tearDown() {
    reset(partsService);
  }

  @Test
  public void testGetPart() {
    Part partResponse = resources.target(PARTS_ENDPOINT + "/1").request()
        .get(TestPartRepresentation.class).getData();
    assertThat(partResponse.getId()).isEqualTo(part.getId());
    assertThat(partResponse.getName()).isEqualTo(part.getName());
    assertThat(partResponse.getCode()).isEqualTo(part.getCode());
    verify(partsService).getPart(1);
  }

  @Test
  public void testGetParts() {
    List<Part> parts =
        resources.target(PARTS_ENDPOINT).request().get(TestPartsRepresentation.class).getData();
    assertThat(parts.size()).isEqualTo(1);
    assertThat(parts.get(0).getId()).isEqualTo(part.getId());
    assertThat(parts.get(0).getName()).isEqualTo(part.getName());
    assertThat(parts.get(0).getCode()).isEqualTo(part.getCode());
    verify(partsService).getParts();
  }

  @Test
  public void testCreatePart() {
    Part newPart = resources.target(PARTS_ENDPOINT).request()
        .post(Entity.entity(part, MediaType.APPLICATION_JSON_TYPE), TestPartRepresentation.class)
        .getData();
    assertNotNull(newPart);
    assertThat(newPart.getId()).isEqualTo(part.getId());
    assertThat(newPart.getName()).isEqualTo(part.getName());
    assertThat(newPart.getCode()).isEqualTo(part.getCode());
    verify(partsService).createPart(any(Part.class));
  }

  @Test
  public void testEditPart() {
    Part editedPart = resources.target(PARTS_ENDPOINT + "/1").request()
        .put(Entity.entity(part, MediaType.APPLICATION_JSON_TYPE), TestPartRepresentation.class)
        .getData();
    assertNotNull(editedPart);
    assertThat(editedPart.getId()).isEqualTo(part.getId());
    assertThat(editedPart.getName()).isEqualTo(part.getName());
    assertThat(editedPart.getCode()).isEqualTo(part.getCode());
    verify(partsService).editPart(any(Part.class));
  }

  @Test
  public void testDeletePart() {
    assertThat(resources.target(PARTS_ENDPOINT + "/1").request()
        .delete(TestDeleteRepresentation.class).getData()).isEqualTo(SUCCESS);
    verify(partsService).deletePart(1);
  }

  private static class TestPartRepresentation extends Representation<Part> {

  }

  private static class TestPartsRepresentation extends Representation<List<Part>> {

  }

  private static class TestDeleteRepresentation extends Representation<String> {

  }
}

Build Your Dropwizard Application

Best practice is to build a single fat JAR file that contains all of the .class files required to run your application. The same JAR file can then be deployed to different environments, from testing to production, without any change to the dependency libraries. To build our example application as a fat JAR, we need to configure the maven-shade plugin by adding the following entries to the plugins section of the pom.xml file.

Here is the sample Maven configuration for building the JAR file.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.endava</groupId>
  <artifactId>dropwizard-blog</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>Dropwizard Blog example</name>

  <properties>
    <dropwizard.version>1.1.0</dropwizard.version>
    <mockito.version>2.7.12</mockito.version>
    <mysql.connector.version>6.0.6</mysql.connector.version>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>

  <dependencies>
    <dependency>
      <groupId>io.dropwizard</groupId>
      <artifactId>dropwizard-core</artifactId>
      <version>${dropwizard.version}</version>
    </dependency>
    <dependency>
      <groupId>io.dropwizard</groupId>
      <artifactId>dropwizard-jdbi</artifactId>
      <version>${dropwizard.version}</version>
    </dependency>
    <dependency>
      <groupId>io.dropwizard</groupId>
      <artifactId>dropwizard-auth</artifactId>
      <version>${dropwizard.version}</version>
    </dependency>
    <dependency>
      <groupId>io.dropwizard</groupId>
      <artifactId>dropwizard-testing</artifactId>
      <version>${dropwizard.version}</version>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>${mockito.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>${mysql.connector.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <createDependencyReducedPom>true</createDependencyReducedPom>
          <filters>
            <filter>
              <artifact>*:*</artifact>
              <excludes>
                <exclude>META-INF/*.SF</exclude>
                <exclude>META-INF/*.DSA</exclude>
                <exclude>META-INF/*.RSA</exclude>
              </excludes>
            </filter>
          </filters>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer
                  implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
                <transformer
                  implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.endava.blog.DropwizardBlogApplication</mainClass>
                </transformer>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Running Your Application

Now we should be able to run the service. If you have successfully built your JAR file, all you need to do is open a command prompt and run the following command to execute it (the JAR name matches the artifactId and version from the pom.xml above):

java -jar target/dropwizard-blog-0.0.1-SNAPSHOT.jar server configuration.yml

If everything went OK, you should see something like this:

INFO  [2017-04-23 22:51:14,471] org.eclipse.jetty.util.log: Logging initialized @962ms to org.eclipse.jetty.util.log.Slf4jLog
INFO  [2017-04-23 22:51:14,537] io.dropwizard.server.DefaultServerFactory: Registering jersey handler with root path prefix: /
INFO  [2017-04-23 22:51:14,538] io.dropwizard.server.DefaultServerFactory: Registering admin handler with root path prefix: /
INFO  [2017-04-23 22:51:14,681] io.dropwizard.server.DefaultServerFactory: Registering jersey handler with root path prefix: /
INFO  [2017-04-23 22:51:14,681] io.dropwizard.server.DefaultServerFactory: Registering admin handler with root path prefix: /
INFO  [2017-04-23 22:51:14,682] io.dropwizard.server.ServerFactory: Starting DropwizardBlogApplication
INFO  [2017-04-23 22:51:14,752] org.eclipse.jetty.setuid.SetUIDListener: Opened application@7d57dbb5{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
INFO  [2017-04-23 22:51:14,752] org.eclipse.jetty.setuid.SetUIDListener: Opened admin@630b6190{HTTP/1.1,[http/1.1]}{0.0.0.0:8081}
INFO  [2017-04-23 22:51:14,753] org.eclipse.jetty.server.Server: jetty-9.4.2.v20170220
INFO  [2017-04-23 22:51:15,153] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:

    GET     /parts (com.toptal.blog.resource.PartsResource)
    POST    /parts (com.toptal.blog.resource.PartsResource)
    DELETE  /parts/{id} (com.toptal.blog.resource.PartsResource)
    GET     /parts/{id} (com.toptal.blog.resource.PartsResource)
    PUT     /parts/{id} (com.toptal.blog.resource.PartsResource)

INFO  [2017-04-23 22:51:15,154] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@58fa5769{/,null,AVAILABLE}
INFO  [2017-04-23 22:51:15,158] io.dropwizard.setup.AdminEnvironment: tasks = 

    POST    /tasks/log-level (io.dropwizard.servlets.tasks.LogConfigurationTask)
    POST    /tasks/gc (io.dropwizard.servlets.tasks.GarbageCollectionTask)

INFO  [2017-04-23 22:51:15,162] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@3fdcde7a{/,null,AVAILABLE}
INFO  [2017-04-23 22:51:15,176] org.eclipse.jetty.server.AbstractConnector: Started application@7d57dbb5{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
INFO  [2017-04-23 22:51:15,177] org.eclipse.jetty.server.AbstractConnector: Started admin@630b6190{HTTP/1.1,[http/1.1]}{0.0.0.0:8081}
INFO  [2017-04-23 22:51:15,177] org.eclipse.jetty.server.Server: Started @1670ms

Now you have your own Dropwizard application listening on ports 8080 for application requests and 8081 for administration requests.

Note that the server argument is what starts the HTTP server, while configuration.yml passes the location of the YAML configuration file to it.

Excellent! We have finally implemented a microservice using the Dropwizard framework. Now let’s take a break and have a cup of tea. You have done a really good job.

Accessing Resources

You can use any HTTP client, such as Postman. You should be able to access your server by hitting http://localhost:8080/parts. At first, you should receive a message that credentials are required to access the service. To authenticate, add an Authorization header with the value Bearer test_token. If done successfully, you should see something like:

{
  "code": 200,
  "data": []
}

meaning that your DB is empty. Create your first part by switching the HTTP method from GET to POST and supplying this payload:

{
  "name":"My first part",
  "code":"code_of_my_first_part"
}
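
If you prefer scripting these calls instead of using a GUI client, here is a small sketch using the standard JAX-RS client API (available, for example, by adding the dropwizard-client module to the pom); the URL and token match the values used above:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;

public class PartsApiClientExample {
  public static void main(String[] args) {
    Client client = ClientBuilder.newClient();

    // GET /parts, authenticating with the bearer token accepted by our Authenticator skeleton.
    String parts = client.target("http://localhost:8080/parts")
        .request()
        .header("Authorization", "Bearer test_token")
        .get(String.class);
    System.out.println(parts);

    // POST /parts to create the first part, using the payload shown above.
    String created = client.target("http://localhost:8080/parts")
        .request()
        .header("Authorization", "Bearer test_token")
        .post(Entity.json("{\"name\":\"My first part\",\"code\":\"code_of_my_first_part\"}"),
            String.class);
    System.out.println(created);

    client.close();
  }
}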

All other endpoints work in the same manner, so keep playing and enjoy.

How to Change Context Path

By default, a Dropwizard application starts and runs at the root context path /. In other words, if you do not specify a context path, the application can be accessed from the URL http://localhost:8080/. If you want to configure your own context path for your application, add the following entries to your YAML file:

server:
    applicationContextPath: /application

Wrapping up our Dropwizard Tutorial

Now that you have your Dropwizard REST service up and running, let’s summarize the key advantages and disadvantages of using Dropwizard as a REST framework. It should be clear from this post that Dropwizard offers an extremely fast bootstrap of your project, and that is probably its biggest advantage.

It also includes all the cutting-edge libraries and tools you will need to develop your service, so you do not need to worry about assembling them yourself, and it gives you very nice configuration management. Of course, Dropwizard has some disadvantages as well. By using Dropwizard, you are somewhat restricted to what it offers or supports, and you lose some of the freedom you may be used to when developing. Still, I wouldn’t even call that a disadvantage, as this is exactly what makes Dropwizard what it is—easy to set up, easy to develop with, and yet a very robust and high-performance REST framework.

In my opinion, adding complexity to the framework by supporting more and more third-party libraries would also introduce unnecessary complexity in development.

Feel free to share on social networks. Find the buttons below this post. This opinion article is for informational purposes only.

Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

 

 

 

This article is written by Dusan Simonovic

Brought to you by Toptal

Edited by Temitope Adelekan

 

Selling Your Business? Stop Leaving Money on the Table


J9

Key Highlights

  • Current market conditions are prime for selling a business. The market is experiencing high multiples due to plentiful dry powder held by private equity firms, record amounts of cash held by strategic corporate buyers, a low interest rate environment, and high prices for publicly-traded equities.
  • The time it takes to sell generally ranges from five to twelve months. The determining factors around timing include the size of your business and the dynamic balance between buyers and sellers in the market.
  • Valuations are more of an art than a science. The best business valuation methods typically involve cash-flow. Still, the three most commonly utilized valuation calculations are the discounted cash flow, market multiples, and asset valuation.
  • The best practices for maximizing shareholder value include the following:
    • Make sure the business can thrive without you. You need a management team or key employees that can continue to drive cash flow, especially if you plan to exit the business or will have limited involvement in day-to-day operations. You should also broaden your customer base so that the business is not at risk if a couple key customers leave post-sale.
    • Learn the dynamics driving acquisitions in your industry. Many business owners spend their time focused on keeping the business running instead of devoting energy to planning for its sale. Stay apprised of the motivations for financial and strategic buyers in your industry, as this can help you negotiate a higher exit value.
    • Hire the right advisors. Don’t do it alone. An experienced M&A advisor can market your company to a larger group of potential buyers than you can access on your own. Early engagement of an independent valuation specialist can provide a market check on valuation and allow you to incorporate value drivers into your pre-sale planning.
    • Examine and adjust operational efficiencies strategically. If necessary, it could be worth adopting efficient operating procedures before the sale. This may involve investments in new equipment or technology or changes in staffing.
    • Factor tax considerations into sale decisions. Decisions around how to sell your business (merger, sale of stock, sale of assets) should consider tax implications carefully. It is also important to anticipate changes in tax law.

Investing in the Sale

For many business owners, their business represents the culmination of their life’s work and a primary source of wealth. The reasons leading one to sell a business can vary—perhaps a competitor has presented you with an unsolicited, lucrative offer. Or, perhaps you are simply ready to retire. Regardless of your motivation, the sale process can prove to be complex, with considerations including the right time to sell, whether or not to employ advisors, which business valuation method to use, and how to maximize the valuation. Therefore, when thinking about how to sell a business, you will want to maximize the value through a combination of planning and timing. Building a solid exit plan can take several years, and business owners ideally should start planning for a sale 3-5 years before they wish to transition out. You’ve invested in growing your business. When it comes time to sell your business, you must do the same.

The following analysis will help you understand the current acquisition market environment, how long it takes to sell businesses (small and large), other major considerations during the sale, how an accurate price is determined, and how to maximize acquisition value.

Current Market Conditions for Selling a Business

Currently, with acquisition multiples at a record high, market conditions are optimal for selling a business. According to PitchBook, the median EV/EBITDA multiple hit 10.8x in 1Q 2017, a significant difference from the 8.1x multiple in 2010.

J1

The following factors have converged to create a robust market for acquisitions with high acquisition multiples:

Record “Dry Powder” Held by Private Equity Firms

Research company Preqin reports dry powder for private equity buyout funds of $530 billion at the end of 1Q2017, a significant increase from the recent low of approximately $350 billion at the end of 2012. Further, new fundraising by private equity fund managers shows no signs of slowing. In the early part of 2017, Apollo was seeking $20 billion for a new fund, and KKR had raised $13.9 billion for its new fund.

J2

Strategic Corporate Buyers are Holding Record Amounts of Cash

According to Factset, US corporations held $1.54 trillion in cash reserves as of the end of 3Q2016, the highest total in at least ten years, and a dramatic increase from the $700+ billion figure reported in 2007. Of this, much is held overseas, and if repatriated, a portion may be used for acquisitions.

For a strategic buyer, acquisitions can deploy cash reserves and generate returns in excess of corporate treasury bank accounts and investments. Corporations also seek acquisitions that create operating efficiencies or bolster their position in consolidating industries. Consequently, strategic buyers often pay a premium for acquisitions compared to financial buyers such as private equity firms.

Low Interest Rate Environment

Those interested in selling a business benefit from low interest rates, as they directly affect acquisition prices. Duff & Phelps, which publishes a widely used study of the cost of equity capital, incorporates the ten-year trailing rate on the 20-year Treasury bond in its benchmark figure, which currently stands at 3.5% and reflects the low yields of the last ten years. Duff & Phelps’ comparable rate at the end of 2008 was 4.5%. Over the same period, the equity risk premium decreased from 6.0% to 5.5%.

High Prices for Publicly-traded Equities

Business values are often determined with reference to public equities, and with the S&P 500 and NASDAQ at or near record levels, those looking to sell a business benefit from a comparable increase in prices.

All of these factors have led to an acquisition market ideal for selling a business. Large acquisitions have recently been made for eye-popping prices. Over the past year:

  • JAB Holding Company offered to acquire Panera Bread for $7.5 billion, approximately 19.5x Panera’s EBITDA according to Nation’s Restaurant News.
  • Private equity-owned PetSmart acquired pet product site Chewy in the largest acquisition of a VC-backed internet retailer. Chewy is one of the fastest-growing eCommerce retailers on the planet.
  • Unilever acquired Dollar Shave Club in 2016 for $1 billion, paying 6.67x 2015 sales and 5x projected 2016 sales.

Despite these favorable conditions, selling a business still requires advance planning and thought. Numerous factors can positively or negatively affect the value of your business. Addressing these issues early can be beneficial when it comes time to sell.

How Long Does Selling a Business Take?

The duration of the sale process varies. One determining factor is the size of your company. As of the end of 2016, the median time a small business was on the market was a little over 5 months (160 days), down from a peak of 200 days in mid-2012. For larger companies, the sale process can take between 5 and 12 months, as indicated below.

J0

My experience as a business valuation expert is much the same. The owner of a larger business is more likely to employ an M&A advisor to sell the business, and the advisor is more likely to conduct an auction process to maximize the business value. In addition, as the business becomes more complex, the involvement of more people can lengthen the due diligence process. I have led due diligence teams in large acquisitions where we regularly conducted meetings with as many as fifteen people, including specialists from various departments. Inevitably, inboxes became crowded and the frequency of meetings increased. It became more difficult to ensure that everybody involved was on the same page.

The time it takes to sell your business is also based on the dynamic balance of business sellers and business buyers in the market. The importance of this is particularly pronounced in the small business acquisition market, as seen in the chart below. In 2012, fewer buyers had the resources to buy a business, and acquisition financing from banks and other lenders was still negatively affected by the 2008 financial crisis. As the number of buyers and availability of financing increased, the demand by buyers increased, and median time to sell a business decreased.

J4

Considerations in Determining When to Sell Your Business

Your Motivations for Selling

In general, the value of a business is equal to the sum of all expected future cash flows. When the value of the offer is greater than your projected future value of the firm, it’s time to sell.

“Value” can have many meanings. For one, the business may hold financial or strategic value that makes it compelling to an acquirer. Alternately, the business owner may have other financial uses for the sale proceeds—if the return on the alternative investment is higher than on the business, it’s also time to sell.

However, there can be non-financial motivations for selling a business. I frequently see business owners who have spent a significant portion of their lives building a business and are simply ready to move on to the next venture. Others sell for lifestyle reasons: a former client sold several businesses over 20+ years to fund his travels around the world. Had he agreed to stay with these companies post-sale, he would have received higher valuations. Still, the flexibility to travel and pursue adventures remained his priority.

This is consistent with seller surveys. According to a 2016 survey, the top motivation for small business owners to sell their businesses was retirement (40%), followed by burnout (21%) and the desire to own a bigger business (20%).

J5

Business Growth

Above all else, a buyer wants assurance that the cash flows paid for will be realized after the sale. Selling a business will be easier, and the value received by shareholders maximized, if the business is growing and profitable. The ideal time at which to sell a business is when cash flow, growth, and consequent valuations are going to peak. When a seller or buyer anticipates a decline in the rate of growth, it could result in a significant drop in value. As you might expect, this is not a recommended time to pursue a sale.

The importance of growth to business value and sale timing can be illustrated by the Constant (Gordon) Dividend Growth Model: Value of the Stock = Dividend / (Required Rate of Return – Expected Dividend Growth Rate)

Let’s apply this formula to an example. If a business pays $1 million in dividends, and the required rate of return is 13.5%, a business that has no dividend growth, all other factors held constant, would be worth approximately $7.4 million. On the other hand, if the same business is expected to grow 1% per year, the value increases to $8 million. For a company that does not pay dividends, the same principle can be applied to cash flow. In this example, each percent increase in expected growth leads to an 8% increase in value.
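
To make the arithmetic explicit, here is the calculation behind those figures, using the $1 million dividend and 13.5% required rate of return from the example:

Value with no growth:  $1,000,000 / (0.135 - 0.00) ≈ $7.4 million
Value with 1% growth:  $1,000,000 / (0.135 - 0.01) = $8.0 million
Resulting increase:    8.0 / 7.4 - 1 ≈ 8%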

Tax Considerations

Just as the legal form of business at a business’ inception is determined by tax considerations, when it comes time to sell a business, the choice among a merger, sale of stock, or sale of assets should also factor in tax implications.

For example, a sale of assets will likely result in capital gain or loss treatment, whereas an employment agreement results in ordinary income and is taxed at a higher rate. Even in a sale of assets, you should allocate the purchase price among assets in a tax-efficient manner. An allocation to inventory or short-lived assets will typically result in more favorable tax treatment than an allocation to real property or goodwill.

Even expectations of a change in the US tax laws can impact the sale of businesses. If the current presidential administration were likely to simplify the tax code and decrease the capital gains tax rate, business owners would likely wait to sell. When I’ve experienced cases such as these, the running joke among M&A professionals was that business sellers would likely live on artificial life support in order to survive into the new tax year and reap higher net proceeds.

Buyer Motivations

The market for acquisitions is dynamic. An owner or manager seeking to sell a business should be aware of industry-specific developments and direct their selling efforts to leverage those trends.

In my acquisitions work for an insurance company, our growth strategy was to acquire companies in markets that were overseas and less competitive. We also focused on acquisitions that would add internet sales to our existing team of insurance agents. Some of our competitors were seeking similar acquisitions. Business owners aware of those industry dynamics were able to develop a business sale strategy based on these dynamics, maximizing shareholder value.

Here are additional examples of industry-specific strategies:

  • A fast-growing business in a slow-growth industry should focus on strategic buyers seeking high growth. In May 2016, food company Hormel paid $286 million for Justin’s, a fast-growing producer of organic nut butters.
  • Companies with a younger customer base can be good acquisitions for established companies in the same space. Wal-Mart recently sought to expand its customer base to younger consumers by spending $200 million on eCommerce startups with direct-to-consumer models, including Jet.com, Moosejaw, Shoebuy, and ModCloth.
  • For private equity buyers, businesses that lead to increased sales, lowered overhead, and increased gross margins continue to be attractive. These buyers are attracted to assets with considerable scope for optimization and efficiency enhancements.
  • For strategic buyers, decisions about capital investments are often made by comparing build vs. buy options. A business that enables a strategic buyer to reach its financial or strategic goals will always have a pool of potential acquirers.

The Value of Advisors

In selling a business, you may be tempted to cut costs and undertake the task alone. However, the utilization of experienced M&A lawyers is always advisable, as contracts allocate the risk of the transaction between parties, and often contain detailed financial terms. Retaining an M&A advisor can also lead to a higher price for the sale of a business. Additional advisors such as accountants or technology and human resource specialists can also add value in specific situations.

As a financial consultant, I worked with a business owner who initially attempted to sell his business on his own by generating his own list of competitors and other potential buyers. After failing, he assembled a team of lawyers and M&A advisors late in the process. This unsuccessful sales attempt tarnished the sale process and raised questions about the value of the business, ultimately leading to a 25% lower sale price. In addition, the owner, who was originally interested in remaining with the business post-sale, was forced to sell to a financial buyer with a different strategic vision. He was soon forced out of the company. Though this was an extreme case, I cannot overstate the importance of building out an experienced team of advisors.

Financial Intermediaries

The two types of financial intermediaries are a) M&A advisors and b) business brokers.

Business brokers are generally involved in the sale of smaller firms (typically with values of under $5 million). Many business brokers list businesses for sale in an online database with basic information but do not proactively call potential acquirers. With transactions of this size, the broker faces more difficulty “fully marketing” the transaction and contacting a large number of potential strategic and financial buyers. Compared to business brokers, M&A advisors handle larger transactions and engage in more pre-transaction business planning. They also contact a wider variety and larger number of potential buyers.

The Benefits of Using a Financial Intermediary Include:

  • Reduced time and attention necessary from the business owner. The process of selling a business can often last between six and twelve months. Most business owners don’t have the time or ability to supervise each stage of the process without diverting needed attention away from current business operations.
  • Buffer between buyer and seller. This is especially important in situations where the seller of the business is seeking to keep its plans confidential; an intermediary can solicit interest on a “no-names” basis.
  • A level playing field between novice sellers and experienced buyers. Especially with financial buyers or active strategic buyers, the difference in knowledge of the acquisition process can be vast. Private equity buyers can buy dozens of businesses each year, and the most active strategic buyers, such as Google, can acquire 10+ companies in a year. A business owner selling a business will have trouble competing in knowledge.
  • Network of potential buyers and knowledge of marketing pitches. An experienced financial intermediary with a strong network and marketing knowledge is well-positioned to generate interest in your business. If successful, the price at which you can sell your business will be enhanced by creating competition among buyers in an auction process.
  • Experience with the due diligence process and legal documentation. The due diligence process, whereby buyers examine the books and records of the business being sold, can be too time-consuming and complex a task for business owners to undertake themselves. In addition, experienced financial intermediaries help create a transaction structure and collaborate with attorneys on legal documentation.

The Drawbacks of Using a Financial Intermediary Include:

Price

Financial intermediaries can charge a fixed transaction fee, a retainer, or both. The business seller will also be responsible for the expenses of the intermediary.

  • Business broker fees are generally in the range of 10% of the acquisition price. They typically do not charge a retainer, and fees are only paid upon the sale of the business.
  • Fees for M&A advisors vary more widely. The fixed transaction fee for selling a business generally starts in the $40,000 to $60,000 range, and many advisors base their “success fees” on the “Double Lehman” formula: 10% of the first $1 million of transaction value, 8% of the second $1 million, 6% of the third $1 million, 4% of the fourth $1 million, and 2% of everything above that (see the worked example after this list). According to a 2016 survey, typical middle-market transaction fees were as follows (based on percentage of transaction value):
    • $10 million 3.5% – 5%
    • $50 million 2% – 3%
    • $100 million 1% – 1.5%
    • $250 million 0.75% – 1%
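
As a hypothetical illustration of the Double Lehman formula above, consider a $5 million transaction:

10% of the first $1 million     = $100,000
 8% of the second $1 million    =  $80,000
 6% of the third $1 million     =  $60,000
 4% of the fourth $1 million    =  $40,000
 2% of the remaining $1 million =  $20,000
Total success fee               = $300,000 (6% of the $5 million transaction value)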

You should align your incentives with those of the intermediary. If an advisor’s retainer is disproportionately high, their incentive to complete a deal is lessened. In these cases, the business owner should resist fee arrangements that include a relatively large up-front fee. On the other hand, if the “success fee” is disproportionately high and the advisor only receives significant compensation upon a sale, it creates an incentive for the advisor to complete a deal—even a bad one.

Disclosure of sensitive information

An M&A advisor may contact hundreds of potential buyers and circulate confidential business information in an effort to create a robust auction and maximize business value. The mere disclosure that the business owner is considering a sale can significantly impact customers, competitors, and employees. An experienced advisor can limit the risk of confidential information being disclosed.

Independent Valuation Experts

Retaining an independent valuation expert can maximize value, especially when used in conjunction with an M&A advisor. With a large percentage of M&A advisor fees being paid only if a transaction closes, the M&A advisor experiences an inherent conflict of interest. That is, a business cheaply valued will sell more quickly than one that is fully valued. An independent valuation expert provides the business owner with a second opinion and a market check.

As with employing a financial intermediary, the downside of retaining an independent business valuation expert is price. They can also lengthen the sale process. For many businesses, an appraisal can cost between $3,000 and $40,000 and take 4-6 weeks, although more cost-effective options are available for smaller companies. Valuations of larger or more complicated businesses can take months and be far more costly.

The involvement of experienced merger and acquisition lawyers is critical. After all, structuring a business sale transaction and negotiating the documents are exercises in risk allocation. These documents ensure that the seller will receive the full amount owed to them and will have limited liability post-sale, while also ensuring that buyers receive the value from the acquisition.

To counter typical buyer protection provisions such as representations and warranties or noncompetition and nonsolicitation agreements, experienced legal advisors can help you obtain favorable terms and secure protections for you. This is especially important if you are selling to a business of much larger size, which would inherently have more negotiating power.

J6

Determining the Right Price

Over the years, I’ve come to find that business valuation is as much art as science, as evidenced by the fact that 27% of business sale transactions don’t close. Of those that don’t close, 30% fail because of a gap in valuation. However, experts generally agree that there are three primary methods of business valuation: discounted cash flow, market multiples, and asset valuation.

J7

While all of these methods can prove useful in the right situation, valuing earnings or cash flow will generally provide a more accurate view of the value of the business being sold. Even better is when a business owner knows of a nearly identical business that has recently been sold and the price at which it sold.

J8

Discounted Cash Flow and Capitalization of Earnings Methods

Absent a recent comparable business sale for benchmarking, discounted cash flow or capitalization of earnings valuation methods can be utilized. On one hand, discounted cash flow models are typically used to model growing businesses, and they estimate pro forma projected cash flows for a reasonable period into the future. These are then discounted back to the present using a market-derived discount rate. Capitalization of earnings models, on the other hand, are used for businesses where future growth is difficult to estimate. This method’s valuations take pro forma earnings and divide them by a capitalization rate.

The pro formas are adjusted for unusual or nonrecurring events and are intended to normalize the numbers. For example, with private companies, it’s not uncommon for executive compensation to vary from industry standards. The model should be adjusted to reflect compensation levels that would be more typical. Similarly, private companies may have contracts with other companies also owned by the owner, and the pro formas should include adjustments if those contracts vary from industry norms.

It is important to note that the appropriate discount rate can be difficult to determine. The discount rate always starts with the “risk-free rate,” the yield on a long-term US Treasury bond, and is adjusted upward to account for the extra risk of buying a business. An equity risk premium, available from sources such as Duff & Phelps, is then added, along with a possible additional premium for a smaller company or a company in a more uncertain industry. On top of those adjustments, the discount rate may be adjusted even higher on the basis of “rule of thumb” estimates that the business appraiser believes are appropriate to capture the true risk of the company.
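
As an illustration only, a discount rate build-up for a small private company might look like the following. The 3.5% risk-free rate and 5.5% equity risk premium are the Duff & Phelps figures cited earlier; the size and company-specific premiums are hypothetical:

Risk-free rate (20-year Treasury benchmark)    3.5%
Equity risk premium                            5.5%
Size premium (hypothetical)                    3.0%
Company-specific premium (hypothetical)        1.5%
Discount rate                                 13.5%

A rate in this range is consistent with the 13.5% required rate of return used in the dividend growth example earlier.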

Market Multiples

The starting point for a market multiple valuation is a public company in the same industry. Multiples such as price-to-earnings, price-to-sales, price-to-EBITDA, and price-to-book are widely available from sources such as Bloomberg and Google Finance. The multiples for the public companies are then applied to the appropriate data for the business being valued. Adjustments to the resulting number are then applied to account for the difference in liquidity between publicly traded stock, which can be sold easily, and a controlling interest in a company, especially if it’s privately held.

Though high public comparables are great for owners selling their business, they may not reflect the actual value of the company. This is because the prices for public stocks are strongly influenced by general stock market sentiment and investor enthusiasm for sectors that are currently in favor. For example, a technology company will currently have higher market multiples than companies with similar business prospects because of keen investor interest in stocks in the technology sector. In addition, business valuation experts relying on market multiples often find it difficult to develop an appropriate group of public companies. A business valuation that starts with a broad group of comparable companies may not truly reflect the value of the company being sold.

Asset Valuation

The asset valuation method of valuing a company being sold is generally limited to holding companies or asset-rich companies since the value of a business’s assets has little to do with the company’s future cash flow generation. In the case of a holding company, the value of the company is made up of a collection of other corporations or equity or debt investments. Each asset may have its own policy about cash flow distributions to the holding company, so a discounted cash flow valuation is meaningless.

However, exceptions exist with energy or commodity companies. In the case of natural resource companies, cash flow is important, but the value is ultimately determined by the company’s assets underneath the ground. Similarly, a gold company may provide cash to its owners on a regular basis, but its gold is the most important value driver. The decrease in the price of gold from $1,850 per ounce in 2011 to $1,200 per ounce in 2017 will outweigh any change in dividend policy by management.

Maximizing Shareholder Value

A significant portion of businesses that are offered for sale eventually don’t sell. As mentioned previously, one of the major causes is the gap between what the owner believes the business is worth and the price the buyer is willing to pay. Oftentimes, this is because an owner has focused too much on business operations, and not done enough to research or plan for its eventual sale. To avoid this issue, implement the following best practices:

Create a Deep Management Team

The common advice for employees is to “make yourself indispensable”—that is, contribute so much that you become irreplaceable by others. However, for business owners, the best course of action is the opposite: you should ensure that the rest of the team can operate without you. Though you may have been the main point of contact with key customers for years, consider delegating and transitioning these relationships to your team. Otherwise, if and when you leave, there is no guarantee that these clients will stay with the company. The risk of losing important sources of revenue or supply can significantly reduce a purchase price or lead to a failed transaction.

Examine and Adjust Operational Efficiencies Strategically

Examine your current business practices and, if necessary, adopt efficient operating procedures before the sale. This may involve investments in new equipment or technology, or it may mean adding or reducing staff. For example, buyers will be less interested in a business that diverts the time of highly-compensated employees towards tasks that can be done more cost-effectively by others.

I have been involved in many transactions where the pro forma financials and resulting purchase price are adjusted to account for needed or excess employees. If a buyer senses a risk that efficiencies and cost savings are not achievable, they will adjust the purchase price downward. Therefore, implementing these measures before a sale reduces this risk and can help justify a higher valuation.

Broaden Your Customer Base

For most businesses, sales revenue dictates the majority of their value. Buyers will always examine the business’ customer base and evaluate the risk of customers leaving after the sale. For businesses with a concentrated customer base, the risk of losing one or two customers can place downward pressure on the purchase price. You should broaden the customer base to reduce reliance on a small number of key customers.

Alternatively, if you are heavily reliant on a single distribution channel, diversifying the distribution of products or services can also help maximize value. Multiple sources of revenue are always going to lead to a higher valuation.

Build Out Robust Financial Reports and Systems

Buyers need to rely on accurate financial statements and systems to assess the financial performance of a business. I’ve seen many large and complex businesses lack robust accounting and financial processes, relying too heavily on basic financial systems. This represents a risk to buyers. Ultimately, if the buyer can’t rely on the seller’s numbers, the buyer will either adjust the purchase price downward or cancel the transaction completely.

Buyers prefer seller financial statements that are audited by a high-quality, independent, auditing firm. Many business owners use local accounting firms when they start their businesses, and stay with them as the business grows. As a result, the numbers may not properly incorporate procedures that would be used by a larger firm specializing in business accounting. The inability to provide comprehensive and professionally-prepared statements to a buyer might reduce the value of the company.

Conclusion

As a business owner, you have undoubtedly devoted a substantial part of your life to building your business. The decision to sell your business can be simultaneously scary and liberating. Richard Branson recently provided an interesting account of his decision to sell Virgin Records:

“Selling Virgin Records was one of the most difficult decisions I’ve ever had to make. But it was also a necessary and calculated risk. I had never even thought about selling Virgin Records. In fact when EMI made their offer of $1 [billion] in 1992 we had just signed the Rolling Stones which was something we’d been trying to do for twenty years. We had begun life as a small start-up, growing on the back of the success of Mike Oldfield’s Tubular Bells. From a tiny start-up, we grew into the biggest independent record label in the world.

But at the time of this offer we were going through expensive litigation in a court case against British Airways (which we eventually won) following their ‘dirty tricks’ campaign. If we had carried on running both companies they both would have closed…[B]y selling Virgin Records we left both companies in strong positions and kept a lot of people in their jobs. Both businesses are still thriving today.”

Investing in advance planning for the sale of your business is critical to realizing a return on the resources you have already put into it. It is natural to think that the time to properly position and sell your business is an unnecessary burden. However, this time is crucial for enhancing the sale price and ultimately helping you realize the full value of the business. The combination of the right team and adequate investment of time can be the difference between simply closing up shop and maximizing a source of future wealth.

Like Branson, whether you choose to spend this future wealth on a remote island in the sun or on your next venture, well, that’s up to you!

KEY LEARNINGS

Why do people sell their businesses?

In general, when the value of an offer is greater than the sum of your projected future cash flows, it’s time to sell. There can be non-financial motivations for selling as well. Examples include: retirement, burnout, and the desire to own a bigger business.

Feel free to share on social networks. Find the buttons below this post. Remember, information/knowledge is never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

 

 

 

This article is written by Jeffrey Mazer

Brought to you by Toptal

Edited by Temitope Adelekan

 

Is a Cashless Society the New Reality?


F11

Key Takeaways

Only have a minute? Here are the salient points from the article:

  • Many countries (Sweden and India) and regions (EU) are adopting cashless habits or policies. Driven by “contactless” pay technology, increasing digital penetration, costs of using cash, and policy initiatives, the idea of a cashless society is no longer a figment of the imagination.
  • In the near term, we are likely to witness a transition to less-cash societies, rather than a switch to cashless societies. Cash still accounts for 85% of total consumer transactions globally. Among established alternatives to cash, cards are the fastest growing payment instrument.
  • Cashless economy pros: increased scope for monetary policy, reduced tax evasion, less crime and corruption, savings on costs of cash, and accelerated modernization of citizens.
  • Cashless economy cons: potential violation of privacy, increased risk of large scale personal and national security breaches, and technology-dependent financial inclusion.
  • Migrations to a cashless economy include considerations ranging from the purely financial, to those social in nature. Consequently, a country’s specific technological, financial, and social situations will inform its specific benefits, drawbacks, and approach to such a transition.
  • Two case studies in the transition to cashless are 1) India, driven by governmental digitization and demonetization measures, and 2) Sweden, driven by a high-tech culture and digital consumer habits. In Sweden, the government and central bank play facilitating roles.
  • Countries best positioned to go cashless include the US, the Netherlands, Japan, Germany, France, Belgium, Spain, Czech Republic, China, and Brazil.

Money is Technology. Will it Be Replaced?

From barter to cash to checks to online banking, money is an evolving technology that has been part of human history for thousands of years. While cash is expected to remain a significant payment instrument in the near future, factors such as “contactless” pay systems, increasing mobile penetration, and high costs of cash (ATM fees for individuals, cash storage for businesses, currency printing for governments, etc.) are prompting society to reconsider its ubiquity. Some experts support less-cash operations, arguing that high denomination notes should be phased out as smaller bills slowly fall towards disuse. Others are more extreme, declaring a war on cash and advocating for an outright ban on physical currency.

We conclude that we are likely approaching a less-cash future, not a completely cashless future. And, while progress has been made in this transition, it has hardly been universal or uniform. A migration to a cashless economy includes considerations ranging from the purely financial to those social in nature. Consequently, a country’s specific technological, financial, and social situations will inform its specific benefits, drawbacks, and approach to such a transition.

The following discussion of cashless societies pertains to a shift whereby physical cash is replaced by its digital equivalent. Money will still serve as a unit of account and store of value, but no longer as a physical medium of exchange. This piece delves into current global payment trends, the pros and cons of a cashless society, an analysis of country readiness, and case studies of India and Sweden.

Despite the adoption of digital payment methods, global cash use remains high. In fact, cash still accounts for 85% of all consumer transactions globally. Across the world, cash in circulation has remained stable, with the ratio of cash circulation to GDP even increasing across major markets. It continues to be resilient because it provides anonymity and universality to the payer. According to a 2016 report, cash is still expected to remain a significant payment method in the near future. However, services based on immediate payments are more efficient than cash and are expected to accelerate the move to digital payments.

F1

Global non-cash transaction volumes reached 387 billion in 2014, experiencing an unprecedented growth rate of 8.9%. This increase was primarily driven by close to 17% growth in developing markets, compared to 6% in mature markets.

F2

Among established alternatives to cash, cards—debit cards in particular—have been the fastest growing payment instrument since 2010. Meanwhile, check usage has declined consistently for the past thirteen years. More recently, the emergence of mobile card readers, electronic networks for processing large volumes of credit and debit transactions, and digitized private currency have threatened the prevalence of cash.

F3

Though cash will remain prevalent for the foreseeable future, a migration to a cashless society is undoubtedly underway in certain countries. Sweden has long embraced cashless transactions, and the EU has imposed restrictions on large cash payments. In 2014, China had the fourth largest non-cash transaction market by volume, behind only the US, the Eurozone, and Brazil. Financial analysts have estimated that by 2020, eCommerce in China will be worth more than eCommerce in the US, the UK, Japan, Germany, and France combined. So, what are the drivers behind such a major shift?

Pros of a Cashless Society

Increased scope for monetary policy: In normal times, people choose cash’s convenience (at a zero interest rate) over other safe assets offering higher yields. During economic downturns, governments have difficulty stimulating the economy by lowering interest rates, because people choose to hold cash instead. Therefore, due to the existence of paper currency, governments and central banks possess limited power to stimulate economic growth. This is known as the zero lower bound theory.

However, in a cashless society, the inability of consumers to withdraw money from the financial system and store it in physical cash would provide governments and central banks with greater control of the economy through monetary policy. In particular, the unusual solution of a negative interest rate during economic downturns could more effectively be introduced. In a negative interest rate environment, people would pay banks to store their deposits, instead of earning interest on their deposits. This is intended to incentivize banks to lend more. It is also meant to encourage businesses and individuals to invest, lend, and spend money rather than hoard it. In short, a cashless society would enable governments and central banks to more effectively utilize negative interest rates. If -0.5% doesn’t create enough stimulus, perhaps -1% will. If -1% still doesn’t do the trick, then perhaps -3%. In theory, negative interest rates do not have limits to how low they can go. Carnegie Mellon’s Marvin Goodfriend argues in favor of negative interest rates, contending that they would allow central banks to independently pursue monetary policies to stabilize domestic employment and inflation.

Reduced tax evasion: Digital money and money services would bring about increased transparency in transactions, providing governments with enhanced abilities to track and analyze citizens’ financial activities. Ultimately, this would decrease tax evasion and increase tax payouts to the government. A 2016 study conducted by the nonpartisan Centre for Studies in Economics and Finance (CSEF) studied the effects of electronic payments on tax evasion in Europe. CSEF found that the use of electronic payments such as debit and credit cards reduced tax evasion and that there was a positive statistical relationship between cash withdrawals and tax evasion.

Though difficult to pinpoint, experts estimate that tax evasion amounts to between $100 billion and $700 billion a year in the US. The IRS estimates that in 2006, taxes not paid voluntarily were over $450 billion, with a gap of $385 billion still remaining after tax collection efforts. These costs would be even higher in Europe, where tax rates are higher.

Less crime in black markets: The anonymity and untraceability of paper currency facilitate the operations of corrupt activities. In a cashless society, the elimination of this medium of exchange would disrupt their normal operations and force them to rethink their business models. As Peter Sands writes for the Harvard Kennedy School, without high denomination notes, those engaged in illicit activities would face higher costs and greater risks of detection.

The size of the black market, or shadow economy, is substantial. Estimates of its size in the US start at around 8% of GDP. In Europe, where taxes are higher and regulation more onerous, estimates suggest that the size of the underground economy is considerably larger than in the US.

According to Harvard economist Kenneth Rogoff, there is an enormous difference between the amount of currency most OECD countries have in circulation and the amount that can be traced to legal usage in their domestic economies. Currency that is neither in the domestic legal economy nor in the global economy is mainly in the domestic underground economy. As of March 2013, there was $1.3 trillion of US currency in circulation, which translates to around $4,000 for every man, woman, and child living in the United States. Further, nearly 78% of the total currency value was in $100 bills, meaning more than thirty $100 bills per person. By contrast, denominations of $10 and under accounted for less than 4% of the total value of currency in use.
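As a rough back-of-the-envelope check of those per-person figures, here is a short Python sketch. The US population value (roughly 316 million in 2013) is an assumption added for illustration, not a figure from Rogoff's analysis.

```python
# Back-of-the-envelope check of the per-person currency figures cited above.
# The US population figure is an assumption (roughly 316 million in 2013).

currency_in_circulation = 1.3e12   # $1.3 trillion, as of March 2013
population = 316e6                 # assumed US population in 2013
share_in_100s = 0.78               # ~78% of the value held in $100 bills

per_person = currency_in_circulation / population
hundreds_per_person = (currency_in_circulation * share_in_100s) / population / 100

print(f"Currency per person: ${per_person:,.0f}")           # roughly $4,100
print(f"$100 bills per person: {hundreds_per_person:.0f}")  # roughly 32
```

Under that population assumption, the arithmetic lines up with the figures above: roughly $4,000 per person and more than thirty $100 bills each.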

Savings on costs of cash: Nations can benefit from the shift to cashless transactions by saving on the cost of cash. These costs of cash include ATM fees for individuals, cash storage and transportation expenses for businesses, and currency printing costs for governments. According to research conducted by the Tufts Fletcher School of Law and Diplomacy, the aggregate cost of cash in the US is $200 billion annually. The estimated cost of cash is MXN 3-6 billion annually in Mexico, and over Rs 200 billion annually in India.

Proponents claim that cashless transactions and the elimination of cash costs would be especially advantageous for poor individuals and small businesses, the parties that bear the costs of cash most disproportionately. For individuals, cash imposes a regressive tax and impacts the unbanked the most. The unbanked pay four times more in fees to access their money than those with bank accounts and are at five times higher risk of paying cash access fees on payroll and EBT cards.

For businesses, paper currency must be stored, guarded, and accounted for. Mom-and-pop stores, many of which operate in poor neighborhoods and rural areas, often cannot afford security and cash transportation services. Removing cash from the equation could result in savings for the marginalized. As The Fletcher School’s Bhaskar Chakravorti declares, “It is time we acknowledged the cash paradox: While cash may be considered the poor man’s best friend, it also places a disproportionate burden on the poor.”

Fostering the adoption of new wireless technologies: A cashless society could accelerate the path to digitization, pushing along those who might otherwise be reluctant to modernize, or who previously had no need to. According to the McKinsey Global Institute, digital finance could provide an additional $2.1 trillion of loans to individuals and small businesses as providers gain improved abilities to assess credit risk for a larger pool of borrowers. Financial services providers would also benefit from a shift from traditional to digital accounts, potentially saving $400 billion annually in servicing fees.

Cons of a Cashless Society

In addition to myriad potential benefits, this transition might be accompanied by several drawbacks:

Violation of privacy: In a cashless society where all money, payments, and money services are digitized, there is a concern about “big brother” surveillance by governments and by organizations seeking to profit from traceable data. Some opponents of cashless societies view the ability to spend cash anonymously as central to freedom within society.

Elaine Ou, a former lecturer at the University of Sydney, equates a cashless society with the surrendering of individual monetary control to financial institutions. As she articulates in her editorial, “A world without paper money is a world without money. Money belongs to its current holder. It doesn’t matter if a banknote was lost or stolen at some point in the past. Money is current; that’s why it’s called currency! A bank deposit, however, grants custody of money to the bank. An account balance is not actually money, but a claim on money.”

Importantly, a claim on money means that every transaction in a cashless society would have to pass through a financial gatekeeper. If banks and other private institutions hold our money, they also have the right to refuse transactions at their discretion, so certain payments would inevitably be blocked without due process. After all, previous attempts to prevent money laundering have sometimes resulted in legitimate individuals, businesses, and charities losing access to financial services.

Increased risk of security breach: A cashless society may bring about increased risks to personal and national security. From a personal security standpoint, the risks we already experience when we lose credit cards or our phones would only be exacerbated in an environment without paper currency. Today, becoming a victim of digital hackers can lead to denied payments, identity theft, account takeover, fraudulent transactions and data breaches. These risks would still exist in a cashless society, though the volume of cashless transactions and points of exposure for the average consumer would be much higher. What’s more, without cash reserves in households and businesses, a cyber attack or computer malfunction would leave consumers without a safety net.

From a national security perspective, cash has repeatedly demonstrated its importance to consumers and members of society during financial and global crises. During the financial crisis of 2008, cash provided a safe haven: the Reserve Bank of Australia, for example, saw a 12% rise in demand for cash in late 2008 in response to the financial uncertainty.

Decreased financial inclusion: While some experts, as mentioned previously, believe a shift to cashless transactions could eliminate the costs of cash for the marginalized, others believe this shift would exacerbate the existing issue of financial inclusion. While utilizing cash is direct and simple, moving to a cashless society would place pressure on these individuals to sign up for formal financial services, something the poorest might be unable to do.

In developing countries, 2.5 billion people do not have access to traditional financial services. Traditional banking infrastructure struggles to serve low-income customers, particularly in rural areas. The issue of financial inclusion also extends to developed countries: nearly 70 million people in the US and 100 million in Western Europe are unbanked.

One way to combat these effects is to promote mobile connectivity. According to research published by GSMA, mobile phones and mobile banking have been powerful tools for bringing access to payments, transfers, credit, and savings to unbanked people. In conjunction with governmental support and incentives, mobile is uniquely positioned to overcome the challenges of payments: it provides a platform combining digital identity, digital value, and digital authentication for low-cost access to financial services.

While it may seem counterintuitive for developing countries to have high usage of mobile money services, many off-the-grid families and small businesses own basic mobile phones with alphanumeric keypads and black-and-white displays. Another enabling factor is regulators, who increasingly recognize the role that non-bank providers of financial services can play in fostering financial inclusion and are establishing more enabling regulatory frameworks. In 47 of the 89 markets where mobile money is available, regulation allows both banks and non-banks to provide mobile money services in a sustainable way. In addition, it would help if governments promoted access to financial services, or the technology necessary for those services, as a public good, just as they do with education and water.

Currently, 255 mobile money services are live across 89 countries, and the number of registered mobile money accounts globally grew to around 300 million in 2014. Fifteen countries now have more mobile money accounts than bank accounts, indicating that mobile money is a key enabler of financial inclusion.

A successful example of mobile in emerging markets is M-Pesa, which is transforming the financial landscape in Kenya. Launched in 2007 by large mobile network operators, the service allows users to deposit money into an account stored on their cell phones, to send balances via SMS text message to other users (including retailers), and to redeem deposits for cash. It is considered a branchless banking service: customers can withdraw and deposit money through an extensive network of agents. In 2014, there were 81,000 M-Pesa agents in Kenya alone. To appreciate the service’s penetration, consider the following: M-Pesa is used by 17 million Kenyans, equivalent to more than two-thirds of the adult population, and around 25% of the country’s GDP flows through it. M-Pesa has also launched in India, Albania, Romania, and multiple African countries.

The above benefits and drawbacks can help us understand the reasoning behind a country’s decision to go cashless, or the timing at which a country may go cashless. Let us now examine which countries are currently best positioned to adopt cashlessness.

Which Countries Are Best Positioned to Go Cashless?

According to the Harvard Business Review, the first major consideration is the aggregate cost of cash, which identifies the countries with the most to gain from the change. The cost of cash is derived from: 1) the cost of ATM maintenance for banks; 2) the cost of cash to consumers, including the costs of obtaining cash, such as transport to ATMs and ATM fees; and 3) the tax gap, the estimated amount of tax owed to the government that goes uncollected or unreported because of cash transactions.

The map below represents these aggregate costs of cash. A caveat for its interpretation: countries indicated as having “low” costs are not necessarily closer to being cashless societies; the map simply indicates that their costs of cash are relatively lower than those of other countries.

[Figure: Aggregate cost of cash by country]

Here is a breakdown of cost of cash categories, borne by different parties:

  • ATM maintenance costs borne by banking institutions: These are disproportionately high in many parts of the developing world, such as sub-Saharan Africa and Latin America. They are also high in geographically large, sparsely populated countries, such as Canada, Russia, and Australia, where logistics pose many challenges.
  • The absolute cost of cash to consumers: These costs are high in some of the world’s most populous countries, including Indonesia, Nigeria, Bangladesh, India, China, and the United States. They are high in many of the major European countries, such as Germany and France, as well as in Japan. These costs are lower in several Scandinavian countries with relatively entrenched mobile payments systems, such as Sweden, Finland, and Denmark, as well as countries with rapidly evolving mobile payment systems, such as South Korea and Kenya.
  • Tax gap as a cost to governments: This tends to be higher in emerging markets, where shadow economies are larger. In India, for example, the tax gap could be as large as two-thirds of overall taxes owed. The larger the tax gap, the more a country has to gain from migrating to a cashless economy.

The second major consideration in determining a country’s readiness is its level of digital advancement and infrastructure. Developing countries in Asia and Latin America are leading in momentum. They also benefit from ongoing investment, remaining attractive destinations for startups and for private equity and venture capital. On the other hand, most Western and Northern European countries, Australia, and Japan have been slowing down in momentum.

Based upon these factors, the US, the Netherlands, Japan, Germany, France, Belgium, Spain, the Czech Republic, China, and Brazil have the greatest potential for unlocking value through a policy- and innovation-led migration to a cashless society.

Clearly, various regions have different benefits to consider and are at varying levels of readiness for a cashless economy. The following section details case studies of two countries already experiencing such a transition. The first country we explore is India, whose transition has largely been propelled by the government. The second country we examine is tech-forward Sweden, which has experienced a more natural progression towards a cashless society, prompting the Swedish government’s role to be more of a facilitator.

Spotlight on India’s Demonetization Campaign

India is an interesting case study because of its historical reliance on cash and its lower digital evolution index. Yet it stands to benefit significantly, given its challenges with financial inclusion and corruption and its relatively high costs of cash. Interestingly, much of the transition has been initiated and driven by the government through both voluntary and involuntary measures. It seems, then, that the Indian government believes the benefits of a cashless society significantly outweigh its potential issues.

A shocking mandate arrived in November 2016, when India’s Prime Minister Narendra Modi made a surprise public address via live television. He announced that all 500 ($7.50) and 1,000 ($15) rupee notes, representing 86% of the currency in circulation by value, would cease to be legal tender, giving the public 50 days to exchange them. While citizens were permitted to exchange their 500 and 1,000 rupee notes for higher denominations, the government prohibited individuals from exchanging more than 4,000 rupees ($60) at a time.

Prior to the announcement, over 95% of India’s transactions were conducted in cash, 90% of vendors had no means of accepting electronic payment, and nearly half of the population did not have bank accounts. Modi’s ostensible motivation was to reduce corruption: he believed these high-denomination notes were used to finance terrorism, fund illegal drug sales, fuel the black market, drive counterfeiting, and pay bribes. Since the announcement, however, the stated objective of the exercise has shifted from rooting out black money to modernizing the Indian economy.

Modernization has been a priority for the Indian government over the last decade, during which it has taken several measures to accelerate digitization. In 2009, the government launched Aadhaar to improve digital identity. Then, to provide citizens with bank accounts, the government sanctioned the launch of 11 payment banks, offering incentives to open accounts. When the Unified Payments Interface (UPI) launched in 2016 as a way for banks to transfer money directly to one another, the Reserve Bank of India advocated for it. After the demonetization announcement last year, the government introduced further incentives for digital purchasing, including discounts on petrol, diesel, and railway season tickets.

Perhaps unsurprisingly, the controversial demonetization policy has been met with both pointed criticism and praise. Here are a few details regarding the results:

Effects on citizens: In the immediate aftermath of the announcement, chaos erupted. Long lines formed at ATMs and banks, and altercations broke out as people waited for hours, sometimes more than twelve. Often, repeated trips to the bank were necessary. Banks, which had not been notified of the change in advance either, did not have enough new notes for the masses looking to redeem their canceled ones.

Monishankar Prasad, a New Delhi-based author, pointed out that unbanked citizens and the poor were caught off guard. Without access to structural resources, these people were hit the hardest.

University of Pennsylvania’s management professor Mauro F. Guillen, however, argues that the long-term benefits outweigh the short-term costs: “In the short term, [the move] could stifle some businesses that are legal and clean, if they use cash payments. But everyone will adjust. And while it can hurt some small businesses and individuals, it is better to do it than not.”

Effects on corruption: It was originally thought that those in the shadow economy would be unable to exchange or deposit their illicitly obtained wealth. In theory, canceled banknotes that went unredeemed would allow the Indian government to add a large sum of assets to its balance sheet, an amount estimated at $45 billion. However, even with strict limitations on banknote exchanges, the black market was still able to unload much of its money. How it managed to do so is still under investigation, but a variety of tactics appear to have been used, including cutting deals with corrupt bankers, threatening bank officials, and exploiting dormant bank accounts. India’s Enforcement Directorate has been investigating bank branches throughout the country.

While experts acknowledge that the move could create a temporary obstacle to black-economy operations, many question its efficacy as a long-term solution. They assert that certain trades and areas cannot be digitized simply by willing it. Others warn that it is only a matter of time before the black market turns to alternative mediums of exchange, such as the US dollar or the pound sterling.

Effects on digitization and modernization: As expected, Modi’s demonetization campaign has proved to be a boon for the country’s e-payment providers. For example, Paytm reported a 3x surge in new users while Oxigen Wallet’s daily average users increased by 167% since demonetization began.

Market and political response: The markets have downgraded India’s short-term growth but remain optimistic that the disruption will be outweighed by long-term benefits. In December 2016, S&P Global Ratings lowered its estimated economic growth rate for 2016-17 by a full percentage point to 6.9% to reflect the disruption. However, Dharmakirti Joshi, Chief Economist of Crisil, a subsidiary of S&P Global, noted that “We expect lower private consumption in fiscal 2017, but expect demand to revive and growth to rebound in fiscal 2018. India should shortly revert back to an 8% annual growth trajectory.” The Wall Street Journal similarly comments that while GDP growth slowed as a result of the demonetization policy, “India is expected to remain one of the world’s fastest-growing large economies.”

In addition, the March 2017 electoral victory of the BJP, Modi’s party, is viewed by some as an endorsement of his groundbreaking demonetization policy. The stock markets rallied at the prospect of the BJP victory: on the next trading day, the Bombay Stock Exchange Sensitive Index (Sensex) shot up 496 points (1.71%), and the National Stock Exchange 50-share index closed above 9,000 for the first time in history.

Spotlight on Sweden

Next, we move on to Sweden, a country with lower costs of cash and advanced digital infrastructure. Unlike in India, consumers’ habits and the markets have largely dictated the transition to a cashless society, with the government and central bank (the Riksbank) helping to facilitate the change. Sweden is also one of the first countries to adopt a negative interest rate, leveraging its citizens’ cashless preferences to stimulate the economy.

The Swedes are known for their embrace of technology and cashless transactions. Swedish buses and the Stockholm metro do not accept cash, and retailers are legally entitled to refuse coins and notes. Street vendors and even churches increasingly prefer electronic payment. Swedes are so hooked on the convenience of digital money that cash transactions made up a mere 2% of the value of all payments in Sweden last year. In shops, cash is now used for less than 20% of transactions, half the share of five years ago and significantly below the global average of 75%. When it comes to alternative payment methods, Swedes use cards three times as often as the average European, averaging 207 payments per card in 2015. Preferring to pay digitally, Swedes have low demand for cash, which is dropping at a rate of 20% a year. As a result, about 900 of Sweden’s 1,600 bank branches no longer keep cash on hand or take cash deposits, and cash machines are being dismantled, especially in rural areas. The value of Swedish krona in circulation has fallen from around SEK 106 billion in 2009 to SEK 80 billion last year.

Taking note of its citizens’ preferences, the central bank and other major banks jointly created the popular digital wallet Swish, which enables payments between bank accounts in real time. The Riksbank’s involvement in Swish’s creation, and the credibility it lends to the service, has been critical to Swish’s success; Swish is now used by close to half of the Swedish population. In addition, capitalizing on its citizens’ embrace of technology and cashless transactions, Sweden is one of the first countries whose central bank has adopted a negative nominal interest rate. Earlier this year, in its continuing battle against deflation, the Riksbank held its nominal interest rate at negative 0.5% and stressed the possibility of further cuts. Although retail banks have yet to pass negative rates on to their customers, it may be only a matter of time until they do so.

For individual consumers, the move towards cashlessness has raised a number of complex issues. Last year, the number of electronic fraud cases reached 140,000, more than double the figure of a decade ago. There is also concern that the ease of electronic payments, combined with negative interest rates, is driving soaring debt burdens. These fears are not unfounded: Swedish household debt is at an all-time high, with average household debt at a record 180% of disposable income. Sweden is also currently experiencing a housing crisis; money is so cheap to borrow that Swedes are funneling cash into property.

Critics also point to concerns that pensioners who rely on cash may be marginalized and excluded; only 50% of Swedish National Pensioners’ Organisation members use cash cards everywhere. Perhaps for these reasons, cash is not dead: Sweden’s central bank, the Riksbank, predicts it will decline quickly but still be circulating in twenty years.

The Paths to a Cashless World Are Many and Varied

A cashless society is no longer just a figment of the imagination. While cash still reigns globally in aggregate, progress towards cashlessness is particularly pronounced in certain countries. It is also clear that there is no one-size-fits-all solution for such a major shift. Because the migration involves technological, financial, and social considerations, we can expect each country to select an approach according to its unique positioning and capabilities.

Regardless of the approach, the transition to digital money and money services will have profound implications for some of the most basic aspects of society. This great change presents opportunities for governments to address issues surrounding income inequality and poverty, and for entrepreneurs to create innovative, disruptive businesses.

Feel free to share this article on your social networks. Remember, information and knowledge are never enough. Let us spread the word!

Follow my blog for more insightful articles: http://temitopeadelekan.com

LinkedIn connect: Temitope Adelekan

Twitter: @taymethorpenj

This article was written by Melissa Lin

Brought to you by Toptal

Edited by Temitope Adelekan