Friday, 2 March 2018


Few Posts Giving 404 Not Found | WordPress

WordPress Posts Giving 404 Not Found Because of a Dot (.) in the Permalink

We have a website that serves as a search engine for npm packages, showing the steps to install them. The website was running fine, but after we submitted its sitemap to Google Webmaster Tools, it reported lots of errors saying that some posts were returning 404.

On checking, we observed that all permalinks containing a dot (.) in the title were returning 404. To get rid of this issue, add the following snippet to the functions.php of the theme being used:

remove_filter('sanitize_title', 'sanitize_title_with_dashes');
add_filter('sanitize_title', 'sanitize_filter_se_119069', 10, 3);

function sanitize_filter_se_119069($title, $raw_title = '', $context = 'display'){
    $title = strip_tags($title);
    // Preserve escaped octets.
    $title = preg_replace('|%([a-fA-F0-9][a-fA-F0-9])|', '---$1---', $title);
    // Remove percent signs that are not part of an octet.
    $title = str_replace('%', '', $title);
    // Restore octets.
    $title = preg_replace('|---([a-fA-F0-9][a-fA-F0-9])---|', '%$1', $title);

    if (seems_utf8($title)) {
        if (function_exists('mb_strtolower')) {
            $title = mb_strtolower($title, 'UTF-8');
        }
        $title = utf8_uri_encode($title, 200);
    }

    $title = strtolower($title);
    $title = preg_replace('/&.+?;/', '', $title); // kill entities

    // Unlike the default filter, do NOT replace dots with dashes:
    // $title = str_replace('.', '-', $title);

    if ( 'save' == $context ) {
        // Convert nbsp, ndash and mdash to hyphens
        $title = str_replace( array( '%c2%a0', '%e2%80%93', '%e2%80%94' ), '-', $title );

        // Strip these characters entirely
        $title = str_replace( array(
            // iexcl and iquest
            '%c2%a1', '%c2%bf',
            // angle quotes
            '%c2%ab', '%c2%bb', '%e2%80%b9', '%e2%80%ba',
            // curly quotes
            '%e2%80%98', '%e2%80%99', '%e2%80%9c', '%e2%80%9d',
            '%e2%80%9a', '%e2%80%9b', '%e2%80%9e', '%e2%80%9f',
            // copy, reg, deg, hellip and trade
            '%c2%a9', '%c2%ae', '%c2%b0', '%e2%80%a6', '%e2%84%a2',
            // grave accent, acute accent, macron, caron
            '%cc%80', '%cc%81', '%cc%84', '%cc%8c',
        ), '', $title );

        // Convert times to x
        $title = str_replace( '%c3%97', 'x', $title );
    }

    // Note the dot in the character class: dots are kept in the slug.
    $title = preg_replace('/[^%a-z0-9 ._-]/', '', $title);
    $title = preg_replace('/\s+/', '-', $title);
    $title = preg_replace('|-+|', '-', $title);
    $title = trim($title, '-');

    return $title;
}

After adding this, restart the php-fpm process with the command sudo service php-fpm restart, and the pages that were returning 404 will start returning a 200 response code.
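For intuition, here is a rough Python approximation of what the custom sanitizer is meant to do to an ASCII title in the default 'display' context. The key point is that dots are not turned into dashes (note the commented-out str_replace line in the snippet), so the slug WordPress computes for an incoming request matches the stored slug and the 404 goes away. The sanitize_slug helper below is purely illustrative, not WordPress code:

```python
import re

def sanitize_slug(title):
    """Illustrative approximation of the custom filter: dots survive."""
    title = title.lower()
    title = re.sub(r'&.+?;', '', title)           # kill entities
    title = re.sub(r'[^%a-z0-9 ._-]', '', title)  # note: '.' is allowed
    title = re.sub(r'\s+', '-', title)            # spaces become hyphens
    title = re.sub(r'-+', '-', title)             # collapse runs of hyphens
    return title.strip('-')

print(sanitize_slug("Install node.js 8.9"))  # install-node.js-8.9
```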

Saturday, 17 February 2018


How to Make a REST API in Django


Create REST API Using Core Django

Django is a popular web framework in the Python world for rapid development. Here we will learn how to quickly build a REST API (Representational State Transfer Application Programming Interface) using core Django.

# Prerequisites:

Make sure you have the following installed: Python and Django (the tutorial output below shows Django 2.0.2).

# End Result:

We will make a small application that keeps track of friends and their details.
  • API to list all friends. 
  • API to get the details of a particular friend. 
  • API to add a friend to the list. 
  • API to delete a friend from the list. 

# Video Tutorial

After installing the prerequisites, follow the steps below:

# Create Project

django-admin startproject my_project

Here the project name is my_project. This command will create a directory called my_project in your current working directory, with some pre-populated directories and files. This directory is the root directory for your project.

# Create App

cd my_project
python manage.py startapp my_app

The app name is my_app. You can create as many apps as you want inside one project. This command creates a my_app directory inside the project's root directory.

# Database Creation and Setup

Let's use SQLite as our database for this tutorial. If you plan to use a database other than SQLite, you need to make the appropriate changes in the settings.py file of the project.

To create tables, edit the models.py file of the app.

vim my_app/models.py

Here, we will create a table called MyFriendList. To do this, paste the following code into the opened file:
from django.db import models

class MyFriendList(models.Model):
    friend_name = models.CharField(max_length=200)
    mobile_no = models.IntegerField()


Table MyFriendList has two columns. The first column is friend_name, a char field with max_length 200. The second column is mobile_no, an integer field.

To actually create this table in our database, we need to run the migration commands. But before that, we need to add the app to our project. For this, open the project's settings file.

vim my_project/settings.py

Now, search for the INSTALLED_APPS setting and add 'my_app' to the list.
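Assuming a fresh Django 2.0 project, the edited setting would look like this (only the 'my_app' entry is new; the rest are Django's defaults):

```python
# my_project/settings.py
INSTALLED_APPS = [
    'my_app',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
```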


Now, we will run migrate commands.

python manage.py makemigrations my_app
python manage.py migrate

The first command makes the migration files (files that contain the equivalent SQL statements for the database) and the second command executes them. After executing the migrate command, you will also see migrations for tables that we did not create. These are tables required by Django itself.

# Create Views

Views are used to define business logic. Here we will write views to list, add, update and delete friends. First open the views.py file of the my_app application.

vim my_app/views.py

Now paste the following code in the file:

from django.http import JsonResponse
from django.views import View
from django.views.decorators.csrf import csrf_exempt
from django.utils.decorators import method_decorator
import json

from .models import MyFriendList

class MyFriend(View):

    # To turn off CSRF validation (not recommended in production)
    @method_decorator(csrf_exempt)
    def dispatch(self, request, *args, **kwargs):
        return super(MyFriend, self).dispatch(request, *args, **kwargs)

    def get(self, request):
        friend_list = list(MyFriendList.objects.values())
        return JsonResponse(friend_list, safe=False)

    def post(self, request):
        data = request.body.decode('utf8')
        try:
            data = json.loads(data)
            new_friend = MyFriendList(friend_name=data["friend_name"], mobile_no=data["mobile_no"])
            new_friend.save()
            return JsonResponse({"created": data}, safe=False)
        except (ValueError, KeyError):
            return JsonResponse({"error": "not a valid data"}, safe=False)

class MyFriendDetail(View):

    # To turn off CSRF validation (not recommended in production)
    @method_decorator(csrf_exempt)
    def dispatch(self, request, *args, **kwargs):
        return super(MyFriendDetail, self).dispatch(request, *args, **kwargs)

    def get(self, request, pk):
        friend_list = {"friend": list(MyFriendList.objects.filter(pk=pk).values())}
        return JsonResponse(friend_list, safe=False)

    def put(self, request, pk):
        data = request.body.decode('utf8')
        try:
            data = json.loads(data)
            new_friend = MyFriendList.objects.get(pk=pk)
            for key in data:
                if key == "friend_name":
                    new_friend.friend_name = data[key]
                if key == "mobile_no":
                    new_friend.mobile_no = data[key]
            new_friend.save()
            return JsonResponse({"updated": data}, safe=False)
        except MyFriendList.DoesNotExist:
            return JsonResponse({"error": "Your friend having provided primary key does not exist"}, safe=False)
        except ValueError:
            return JsonResponse({"error": "not a valid data"}, safe=False)

    def delete(self, request, pk):
        try:
            new_friend = MyFriendList.objects.get(pk=pk)
            new_friend.delete()
            return JsonResponse({"deleted": True}, safe=False)
        except MyFriendList.DoesNotExist:
            return JsonResponse({"error": "not a valid primary key"}, safe=False)

Let's understand the above code:

  • First, we imported the necessary modules and models for our application.
  • The MyFriend class contains get and post methods that serve the GET and POST HTTP methods respectively.
  • We have overridden the dispatch method of the View class to turn off CSRF validation. However, this is not recommended for production; we turned it off only for the sake of simplicity.
  • The get method of the MyFriend class lists all friends.
  • The post method of the MyFriend class adds a new friend to the existing list.
  • After that, we have the MyFriendDetail class, containing get, put and delete methods that serve the GET, PUT and DELETE HTTP methods respectively.
  • The get method of the MyFriendDetail class takes one argument called "pk" (primary key) and returns the complete details of the friend with that primary key.
  • The put method of the MyFriendDetail class takes the "pk" to update that particular friend's details.
  • The delete method of the MyFriendDetail class also takes the "pk" argument and deletes the friend whose primary key matches. 

# Route Setup

Now, we need to configure an endpoint (URL) to access the app. First define a route for my_app by editing the project's urls.py file.

vim my_project/urls.py

Now, edit this file as follow:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('friends/', include('my_app.urls')),
]

Here, we have defined the route friends/ for our app my_app. You can define different routes here for multiple apps.

Now define routes for the APIs of my_app. To do this, create a urls.py file inside my_app.

vim my_app/urls.py

Now, paste the following code inside the file:

from django.urls import path

from my_app.views import MyFriend, MyFriendDetail

urlpatterns = [
    path('', MyFriend.as_view()),
    path('<int:pk>/', MyFriendDetail.as_view()),
]

Let's understand the URL patterns:

We have defined an empty pattern for the MyFriend class, so the API will have a structure like <server_name:port>/friends/. (Note: friends/ comes from the my_app route that we defined in the project-wide urls.py.)

For the MyFriendDetail class, we have defined an integer pattern for the key pk. This API will have a structure like <server_name:port>/friends/1/.

So, we are done with all the coding. We have the model, views and URLs in place. Now start the development server.

python manage.py runserver

Performing system checks...

System check identified no issues (0 silenced).
February 17, 2018 - 18:57:06
Django version 2.0.2, using settings 'my_project.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Hurray! Our Django REST APIs are ready at http://127.0.0.1:8000/friends/. Let's try them now.

I am using the Postman app to hit the APIs.

# Add a friend in friend list.

API: /friends/
Method: POST
Data: {"friend_name": "abc", "mobile_no": 8968678990}

# Get the friend list.

API: /friends/
Method: GET

# Edit name of a friend

API: /friends/6/
Method: PUT
Data: {"friend_name": "xyz"}

# Fetch detail of a particular friend

API: /friends/6/
Method: GET

# Delete a friend from friend list:

API: /friends/6/
Method: DELETE

Whoa! We successfully tested the CRUD operations for my_app.
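If you prefer scripting these calls instead of Postman, here is a minimal sketch using only Python's standard library. It assumes the dev server is running at http://localhost:8000 and that a friend with pk 6 exists:

```python
import json
import urllib.request

BASE = "http://localhost:8000/friends/"  # assumed dev-server address

def api_request(path="", method="GET", payload=None):
    """Build a JSON request against the friends API."""
    data = json.dumps(payload).encode("utf8") if payload is not None else None
    return urllib.request.Request(BASE + path, data=data, method=method,
                                  headers={"Content-Type": "application/json"})

# With the server running, urllib.request.urlopen(...) sends each request:
create = api_request(method="POST",
                     payload={"friend_name": "abc", "mobile_no": 8968678990})
rename = api_request("6/", method="PUT", payload={"friend_name": "xyz"})
remove = api_request("6/", method="DELETE")
# e.g. print(urllib.request.urlopen(create).read().decode("utf8"))
```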

The next step is to learn how to deploy a Django application. For this, check out this tutorial.

Setup EFK Stack | Elasticsearch, Fluentd & Kibana With High Availability

How To Setup Fluentd With High Availability
Setup Fluentd With Forwarder-Aggregator Architecture

Fluentd is one of the most popular alternatives to Logstash because it offers features that Logstash lacks. So before setting up Fluentd, let's compare the two:
  • Fluentd has a built-in architecture for high availability (there can be more than one aggregator)
  • Fluentd consumes less memory compared to Logstash
  • Log parsing and tagging are easier
  • Tag-based log routing is possible
Let's start building our centralized logging system using Elasticsearch, Fluentd and Kibana (EFK).
We will follow an architecture with a fluentd forwarder (td-agent), a fluentd aggregator, Elasticsearch and Kibana. The forwarder (the agent) reads logs from a file and forwards them to the aggregator. The aggregator decides what the index_name should be and which Elasticsearch host to send the logs to. Elasticsearch runs on a separate instance to receive the logs, and that instance also has Kibana set up to visualise the Elasticsearch data.

# Architecture:

Following is the architecture for high availability. There are multiple log forwarders, one on each application node, all forwarding logs to the log aggregators. Two aggregators are shown in the architecture below; if one fails, the forwarders start sending logs to the second one.

# Video Tutorial

# Setup Elasticsearch & Kibana

We have already covered the setup of Elasticsearch and Kibana in one of our tutorials. Please follow that post to install Elasticsearch and Kibana.

# Log Pattern

We are considering the log format shown below:

INFO  [2018-02-17 17:14:55,827 +0530] [pool-5-thread-4] [] S3 object deleted, Bucket name: sqs-bucket, Object key: 63c1a5b8-4ddc-4136-b086-df6a8486414a.
INFO  [2018-02-17 17:14:56,124 +0530] [pool-5-thread-9] [] S3 object read, Bucket name: sqs-bucket, Object key: 2cc06f96-283f-4da7-9402-f08aab2df999.

# Log Regex

This regex is based on the logs above and needs to be specified in the source section of the forwarder's td-agent.conf file.

/^(?<level>[^ ]*)[ \t]+\[(?<time>[^\]]*)\] \[(?<thread>[^\]]*)\] \[(?<request>[^\]]*)\] (?<class>[^ ]*): (?<message>.*)$/
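Fluentd evaluates this pattern with Ruby's regex engine, but you can sanity-check it offline by translating the (?<name>...) groups into Python's (?P<name>...) syntax. Note that the pattern expects a class token followed by ": " before the message, so the line below is a hypothetical example that includes such a field:

```python
import re

# The Fluentd pattern, with (?<name>...) groups rewritten as (?P<name>...)
PATTERN = re.compile(
    r'^(?P<level>[^ ]*)[ \t]+\[(?P<time>[^\]]*)\] \[(?P<thread>[^\]]*)\]'
    r' \[(?P<request>[^\]]*)\] (?P<class>[^ ]*): (?P<message>.*)$'
)

# Hypothetical log line in the expected format (class and request filled in)
line = ('INFO  [2018-02-17 17:14:55,827 +0530] [pool-5-thread-4] '
        '[req-42] com.myorg.S3Worker: S3 object deleted, Bucket name: sqs-bucket')

m = PATTERN.match(line)
print(m.group('level'), m.group('thread'), m.group('message'))
```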

# Setup fluentd-aggregator

We will set up only one aggregator for this tutorial. However, you may set up two aggregators for high availability. On the aggregator instance, run the following commands:

curl -L | sh
sudo apt-get install make libcurl4-gnutls-dev --yes
sudo apt-get install build-essential
sudo /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-elasticsearch

# After setup edit conf file and customise configuration
sudo vi /etc/td-agent/td-agent.conf

Content of /etc/td-agent/td-agent.conf. Replace the host value with your Elasticsearch instance's IP or domain.

<source>
  @type forward
  port 24224
</source>

<match myorg.**>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/forward.log
  </store>
  <store>
    @type elasticsearch_dynamic
    # elasticsearch host IP/domain
    host your-elasticsearch-ip
    port 9200
    index_name fluentd-${tag_parts[1] + "-" + Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}

    #logstash_format true
    #logstash_prefix fluentd

    time_format %Y-%m-%dT%H:%M:%S
    #timezone +0530
    include_timestamp true

    flush_interval 10s
  </store>
</match>

Restart the fluentd-aggregator process and check the logs with the following commands:

sudo service td-agent restart

# check logs
tail -f /var/log/td-agent/td-agent.log

# Setup fluentd-forwarder

To set up the forwarder, run the following commands on the application instance.

curl -L | sh
# customise config in file td-agent.conf
sudo vi /etc/td-agent/td-agent.conf

Content of /etc/td-agent/td-agent.conf. Replace the path with the path of your application log, and the aggregator IP with the IP of your aggregator instance. You may use domains instead of IPs.

<match td.*.*>
  @type tdlog
  apikey YOUR_API_KEY
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td

  <secondary>
    @type file
    path /var/log/td-agent/failed_records
  </secondary>
</match>

## match tag=debug.** and dump to console
<match debug.**>
  @type stdout
</match>

## built-in TCP input
## @see
<source>
  @type forward
  port 24224
</source>

<source>
  @type http
  port 8888
</source>

## live debugging agent
<source>
  @type debug_agent
  port 24230
</source>

<source>
  @type tail
  path /var/log/myapp.log
  pos_file /var/log/td-agent/myorg.log.pos
  tag myorg.myapp
  format /^(?<level>[^ ]*)[ \t]+\[(?<time>[^\]]*)\] \[(?<thread>[^\]]*)\] \[(?<request>[^\]]*)\] (?<class>[^ ]*): (?<message>.*)$/

  time_format %Y-%m-%d %H:%M:%S,%L %z
  timezone +0530
  time_key time
  keep_time_key true
  types time:time
</source>

<match myorg.**>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/forward.log
  </store>
  <store>
    @type forward
    heartbeat_type tcp

    <server>
      # aggregator IP
      host your-aggregator-ip
      port 24224
    </server>
    flush_interval 30s

    # secondary host is optional
    # <secondary>
    #    host
    # </secondary>
  </store>
</match>

Restart the fluentd-forwarder process and check the logs with the following commands:

sudo service td-agent restart

# check logs
tail -f /var/log/td-agent/td-agent.log

Now, after restarting td-agent on both the forwarder and the aggregator, you should see data being stored in Elasticsearch. Once Elasticsearch starts receiving data from the aggregator, you can create an index pattern in Kibana and start visualising the logs.

# Create Index Pattern In Kibana

Once you start getting logs in Elasticsearch, you can create an index pattern in Kibana to visualise them. We specified the index_name in fluentd to be of the format fluentd-myapp-2018.02.12, so we will create the index pattern fluentd-*. Follow the steps shown in the pictures below to create an index pattern.

Finally, after creating the index pattern, logs will start appearing in the Discover tab of the dashboard.

Hurray!!! You have successfully set up the EFK stack to centralise your logging. 

How To Setup Kibana-6 With Elasticsearch-6 On Ubuntu 16.04

Guide To Install Elasticsearch-6 And Kibana-6 

Elasticsearch and Kibana are often used together in ELK or EFK setups. Getting your Kibana dashboard up and running with Elasticsearch is very easy. Let's dive in and set up Kibana with Elasticsearch.


# Prerequisites:

  • Ubuntu 16.04 OS instance
  • Security group in place

# Video Tutorial

# First Install Java

To set JAVA_HOME, edit the file /etc/environment and add a line like JAVA_HOME="/usr/lib/jvm/java-8-oracle", then reload it.

sudo apt-get update
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
java -version
sudo vi /etc/environment
source /etc/environment

# Install Elasticsearch-6

curl -L -O
echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install elasticsearch
ls /etc/init.d/elasticsearch
sudo service elasticsearch status

# Change bind address and JVM heap option as per requirement

Set network.host to the bind address you need (for example, 0.0.0.0 to listen on all interfaces) in elasticsearch.yml, and set -Xms4g and -Xmx4g in jvm.options as per your requirement.

sudo vi /etc/elasticsearch/elasticsearch.yml
sudo vi /etc/elasticsearch/jvm.options

# Set replicas to 0 if you are creating a single-node cluster

curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_all/_settings?preserve_existing=false' -d '{"index.number_of_replicas" : "0"}'

# Install Kibana

sudo apt-get update && sudo apt-get install kibana
sudo service kibana restart

# Install nginx

sudo apt-get -y install nginx

# Add nginx config file for kibana

sudo vi /etc/nginx/conf.d/kibana.conf

Replace the server_name value with your server name or IP. We will set up auth in the next step, hence we have placed the auth_basic lines in kibana.conf.

server {
    listen 80;

    server_name your-server-name-or-ip;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

# Setup auth

After installing apache2-utils, run htpasswd; it will ask for a password, so provide one. This username and password will be needed when you access Kibana from the browser.

sudo apt-get install apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.users efkadmin
sudo service nginx restart

# Web view of Kibana

After a successful setup, hit the IP of the Kibana server in the browser. Enter the username and password and you will see the Kibana web UI as shown below.


Saturday, 10 February 2018


AWS Lambda Function With RDS MySQL

How To Use AWS Lambda Function With AWS RDS MySQL

If you ever want a small service that accesses a database and returns some result, you should go for AWS Lambda, as it costs you only when it gets invoked. So you will be paying less than for an EC2 instance.

Let's go ahead and see how we can create a Lambda function that interacts with a database built on AWS RDS MySQL. You can choose your own database, like PostgreSQL hosted on an EC2 instance, but here, for the sake of simplicity, we will use RDS MySQL.

First get your database instance up and running. You will need its endpoint, username, password and database name.
  • Get the repository from my GitHub.
  • Create a database instance.
  • Fill in the database instance details in the code.
  • Make sure you have the correct security group settings in place.

Below is the main sample code:

# A lambda function to interact with AWS RDS MySQL

import pymysql
import sys

REGION = 'us-east-1'

rds_host = ""  # your RDS endpoint
name = "appychip"
password = "appychippassword"
db_name = "appychip"

def save_events(event):
    """Insert a record and fetch content from the MySQL RDS instance"""
    result = []
    conn = pymysql.connect(host=rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
    with conn.cursor() as cur:
        # Parameterized query avoids SQL injection via the event payload
        cur.execute("insert into test (id, name) values(%s, %s)", (event['id'], event['name']))
        conn.commit()
        cur.execute("select * from test")
        for row in cur:
            result.append(row)
        print("Data from RDS...")
        print(result)

def main(event, context):
    save_events(event)

# Local test:
# event = {
#   "id": 777,
#   "name": "appychip"
# }
# context = ""
# main(event, context)

The point to note here is that we are using the pymysql library, which we need to install in the current working directory where our handler file is present. To install it, run the following command:

pip install pymysql -t .

Now, since we have the required library in place, we need to create a zip of our code and upload it to the Lambda function. To make the zip, run the following command:

zip -r function.zip `ls`

Save the lambda function and try running it by passing a test event to the function, which looks like below:

{
  "id": "1",
  "name": "appychip"
}
Now go ahead and run the lambda function. On successful execution, it will insert a record into MySQL.

Hurray!!! You just created a Lambda function that interacts with a database. To create an API endpoint for your Lambda function, check out our tutorial on a scalable architecture using AWS API Gateway, Lambda and DynamoDB.


Tuesday, 12 December 2017


Apply SSL certificate in Nginx

Run your Website over https

HTTPS is a protocol used for secure communication over the internet. Here we will learn how to make our own website run over the HTTPS protocol.


Prerequisites:

  • Public IP or domain name of the server
  • Sudo privileges on the server
  • Nginx installed

1. Either purchase a CA-signed certificate from a third party like GoDaddy or BigRock, or create your own self-signed certificate.

A CA-signed certificate will work as on any other https site, but with a self-signed certificate you will get an exception like this:

So do not use a self-signed certificate in production, as it will show the above error to your users. It is suitable for internal environments only.

To create a self-signed certificate, follow the steps below:
  1. Create a directory, where you will put our certificates.

    sudo mkdir -p /etc/ssl/certs

  2. Now, move into this directory

    cd /etc/ssl/certs

  3. Create SSL certificate

    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nginx.key -out nginx.crt

    This command will ask you certain questions like:

    Country Name (2 letter code) [AU]:IN
    State or Province Name (full name) [Some-State]:MH
    Locality Name (eg, city) []:Mumbai
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:XYZ, Inc.
    Organizational Unit Name (eg, section) []:ABC
    Common Name (e.g. server FQDN or YOUR name) []
    Email Address []

    Be careful while providing the Common Name for your certificate, as the certificate will work for that Common Name only. You can provide an IP or domain name here. If you want to create a single certificate for all your subdomains, you can put an entry like "*.<your-domain>". A certificate created with this wildcard Common Name will be valid for all subdomains of that domain.

  4. The above command will generate two files:
    1. Key file (nginx.key)
    2. SSL certificate (nginx.crt)

    Note: As we specified "-days 365", this certificate will be valid for 365 days from the date of creation.

    Now, we have our self signed certificate.

  5. Now, configure the Nginx configuration file. You must have a "server" block inside your nginx config file, as below:

    server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;
            root /usr/share/nginx/html;
    }

    We need to add some extra configuration lines, as below:

    server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;
            listen 443 ssl;
            root /usr/share/nginx/html;
            ssl_certificate /etc/ssl/certs/nginx.crt;
            ssl_certificate_key /etc/ssl/certs/nginx.key;
    }

    Save this configuration and restart nginx.

    sudo service nginx restart

    Now, try to access your domain over https. It will work! Thanks.
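HTTP clients in scripts will also reject a self-signed certificate by default. If you want to test the endpoint programmatically, you can disable verification explicitly; a hedged sketch with Python's standard library, for internal testing only (never for production clients):

```python
import ssl
import urllib.request

# Build a TLS context that accepts a self-signed certificate.
# This trades away all certificate authentication - internal testing only.
ctx = ssl.create_default_context()
ctx.check_hostname = False       # must be disabled before verify_mode
ctx.verify_mode = ssl.CERT_NONE  # skip chain verification entirely

# With nginx reloaded, this would fetch the page over https:
# urllib.request.urlopen("https://your-domain/", context=ctx)
```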


Monday, 20 November 2017


How To Make A Chrome Extension From Scratch

Creating A Chrome Extension To Show Notification

If you are a Chrome user, you must have seen some cool Chrome extensions which are very useful in daily life. But did you ever wonder how they are built? Are you curious to know? If yes, then this post is for you to get started on writing your own Chrome extension.

Here we will be building a Chrome extension that shows a popup containing a button; clicking the button makes a notification appear.

# Video Tutorial

The basic structure of our application looks like following:

  • Manifest file (manifest.json) - This is the main component of a Chrome extension. It includes information about versioning, permissions, and other useful metadata for the extension. It should be placed in the root of the project folder. 
  • Popup page (popup.html) - This page opens as a popup when you click the extension icon. The popup has a button, and clicking it shows a notification. 
  • Popup JavaScript file (popup.js) - The JavaScript file containing the logic to show a notification when the button in the popup is clicked. 
  • Favicon image (appychip.png) - An image to be displayed in the Chrome extension bar.


 "name": "myextension",
 "description": "Extension to show notification",
 "version": "1.0",

 "content_security_policy":"script-src 'self'; object-src 'self'",

 "browser_action": {
  "default_popup": "popup.html",
  "default_icon": "appychip.png"
 "permissions": ["notifications", "tabs"]


<!-- popup.html -->
<!DOCTYPE html>
<html>
  <head>
    <script src="popup.js"></script>
  </head>
  <body>
    Hey there!!!
    <input type="button" id="show" value="show notification"/>
  </body>
</html>


// popup.js

document.addEventListener('DOMContentLoaded', function() {
    var show = document.getElementById('show');
    // onClick's logic below:
    show.addEventListener('click', function() {

        // Let's check if the browser supports notifications
        if (!("Notification" in window)) {
            alert("This browser does not support desktop notification");
        }

        // Let's check if the user is okay to get some notification
        else if (Notification.permission === "granted") {
            // If it's okay let's create a notification
            var notification = new Notification("Hi there!");
        }

        // Otherwise, we need to ask the user for permission
        // Note, Chrome does not implement the permission static property
        // So we have to check for NOT 'denied' instead of 'default'
        else if (Notification.permission !== 'denied') {
            Notification.requestPermission(function (permission) {

                // Whatever the user answers, we make sure we store the information
                if (!('permission' in Notification)) {
                    Notification.permission = permission;
                }

                // If the user is okay, let's create a notification
                if (permission === "granted") {
                    var notification = new Notification("Hi there!");
                }
            });
        }
    });
});



Steps To Install Extension:

  • Visit chrome://extensions in your Chrome browser and click on "Load unpacked extension"
  • Select the project directory of the chrome extension; the extension should now be visible as shown below:
  • Open a new tab. The favicon should now be visible in the Chrome extension bar.
  • Click on the favicon. This opens a popup with a button, and clicking the button displays the notification.

This is how the notification looks. You can beautify the popup and notification, and also do some cool stuff like calling an API to fetch data and display it.