What’s FodyWeaver

In almost all .NET applications you encounter a lot of boilerplate code that has a fairly standard implementation but is still important to get right. Common examples are overriding the ToString and Equals methods for your model classes, or implementing IDisposable or INotifyPropertyChanged. In all these cases most of the code follows the same pattern.
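For example, a typical hand-written model class repeats the same pattern again and again (a minimal sketch; the Person class is invented for illustration):

    using System;

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }

        // Boilerplate that every model class tends to repeat.
        public override bool Equals(object obj) =>
            obj is Person other && Name == other.Name && Age == other.Age;

        public override int GetHashCode() => HashCode.Combine(Name, Age);

        public override string ToString() => $"Person: Name={Name}, Age={Age}";
    }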

One way to automate this code is to inject it directly into the generated MSIL (Microsoft Intermediate Language). This process of injecting code after compilation, directly into the generated intermediate language, is known as .NET assembly weaving or IL weaving (similar to bytecode weaving in Java).

If you do this before compilation, i.e. add source code lines before the compiler runs, it is known as source code weaving.

What is Fody and how does it work?

Fody is an extensible library for weaving .NET assemblies, written by Simon Cropp. It adds a post-build task to the MSBuild pipeline to manipulate the generated IL. This normally requires a lot of plumbing code, which is exactly what Fody provides. Fody offers an extensible add-in model: anyone can take the core Fody package (which provides the basic machinery for adding the post-build task and manipulating IL) and build their own specific add-ins. For example, Equals.Fody generates Equals and GetHashCode implementations, and ToString.Fody generates a ToString implementation for your classes.
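As a rough sketch of how such an add-in is used (the attribute name below follows the ToString.Fody documentation; check each add-in's README for the exact attribute names and options), you annotate the class and let the weaver generate the method body at build time:

    // Requires the Fody and ToString.Fody NuGet packages plus a FodyWeavers.xml entry.
    [ToString]
    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // After the post-build weaving step, the compiled assembly behaves as if a
    // ToString() override listing the public property values had been hand-written.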

The figure below shows how the whole process works.

You can also use the same technique to implement AOP (aspect-oriented programming), logging, method profiling, and so on.

Fody uses Mono.Cecil, a library for manipulating intermediate language that feels quite similar to the .NET reflection APIs in use. As far as IL manipulation goes, Mono.Cecil is pretty much the only game in town; chances are high that any plug-in you have used for IL manipulation uses Mono.Cecil internally.
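To give a feel for Mono.Cecil, here is a minimal sketch that opens an assembly and walks its types, methods and IL instructions, much as you would walk types and members with reflection (the assembly path is invented for the example):

    using System;
    using Mono.Cecil;

    class CecilDemo
    {
        static void Main()
        {
            // Load an existing assembly from disk (illustrative path).
            var assembly = AssemblyDefinition.ReadAssembly("MyApp.dll");

            foreach (var type in assembly.MainModule.Types)
            {
                foreach (var method in type.Methods)
                {
                    if (!method.HasBody)
                        continue;

                    // Each instruction exposes its OpCode and operand and can be
                    // inspected or rewritten before the assembly is saved again.
                    foreach (var instruction in method.Body.Instructions)
                        Console.WriteLine($"{type.Name}.{method.Name}: {instruction.OpCode}");
                }
            }

            // A weaver would modify the instructions and then persist the changes:
            // assembly.Write("MyApp.Weaved.dll");
        }
    }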

What is Low Code

Why low code matters right now

For modern businesses, the ability to pivot and adapt to a rapidly shifting world of work has become essential. How your company responds to these changes—and how quickly you respond—can make or break your long-term success. As it becomes more of a necessity, your company needs to rapidly embrace digital transformation to meet customers’ growing demands and keep up with the competition.

Your business can implement this change in many ways, and low-code development is an increasingly popular approach for adapting to constantly changing conditions.

What’s the appeal? With a low-code development platform, companies can quickly create and deliver business applications without having to rely on a large amount of manual programming or user training. This not only saves time and makes companies more efficient and productive, but it also allows you to focus on the apps that require the most attention, like customer experience and automation apps.

What is low code?

Simply put, low code is a smart way to put power back into your hands. Specifically, it’s a method of software and application development that allows your workers to create enterprise-grade business apps using drag-and-drop functionality and visual guidance—with very little or no coding experience or knowledge.

The appeal of this approach is that almost anyone can develop an app. Citizen developers—employees in your organization who don’t necessarily have technical or programming expertise—can quickly and efficiently build applications on low-code platforms.

Enabling anyone to create applications using this simplified application development method frees up your professional developers and IT teams to devote their time to creating more complex, business-critical apps. And when these developers use low-code platforms, it helps them work faster because they don’t have to write code line by line.

If you’re searching for an application development method that uses visual modeling, you might also consider using a no-code application platform. Low code and no code share some similarities, including their key purpose. Both platforms were created as an alternative to traditional application development, and they both make it easier for your business to enable citizen developers to build new apps.

There are some key differentiators, though. With low-code platforms, developers need at least a basic understanding of programming, while no-code platforms take a drag-and-drop approach and don’t require any coding knowledge. No code is ideal for building smaller apps, and its capabilities can be limited. Low code, on the other hand, tends to allow developers to create more sophisticated apps.

Low-code development

Low-code platforms have many benefits for your business, such as providing tools to boost organizational agility and empowering employees to quickly build professional-grade apps that solve business challenges.

Low code can help your business:

  • Save time by enabling almost anyone in your company to develop apps rather than waiting for development teams to do it.
  • Boost productivity by freeing up your developers’ schedules so they can focus on building apps that require coding, and ultimately, helping teams work more efficiently.
  • Reduce costs by allowing your business to use existing staff as citizen developers rather than hiring new developers. This enables your professional developers to create more apps in less time.
  • Become more flexible by using low-code platforms to easily change apps without having to spend a lot of time writing code.

Low-code application development does come with challenges, however. While low code doesn’t require a lot of manual coding, your IT teams aren’t completely off the hook—they still must make themselves available to guide both citizen and professional developers along the way.

Low-code development can also make it harder for your company to see what your employees are building, which can create security issues. With on-premises low-code platforms, it’s often difficult for IT to have any visibility into development projects. However, that issue can be solved by moving to the cloud, which allows you to apply rule-based permissions.

What can businesses build with low-code platforms?

Here are just a few practical use cases showing what you can create with a low-code platform:

  • Customer experience apps. With the rise of digital transformation, today’s customers expect easy-to-use, well-functioning mobile apps. Low code allows you to modernize existing apps and significantly speed up new application development.
  • Line-of-business apps. When apps are outdated, can no longer support current processes, and aren’t providing an ideal user experience, a low-code platform can help migrate these apps and simplify process automation.
  • Automation and efficiency apps. These apps give you the tools you need to automate tasks and reduce your dependence on manual, paper-based processes.

 

Kubernetes PV, PVC, Storage Class, and Provisioner

Before diving into the world of Kubernetes, let’s take a look at what Kubernetes was built on – Docker.

Docker is famous for its simplicity and ease of use. That’s what made Docker popular and became the foundation of Kubernetes. A Docker container is stateless and fast. It can be destroyed and recreated without paying much of a price. But it’s hard to live a meaningful life with amnesia. No matter if it’s your database, your key-value store, or just some raw data. Everyone needs persistent storage.

It’s straightforward to create persistent storage in Docker. In the early versions, the user could use -v to create either a new anonymous, empty volume of undetermined size or a bind mount to a directory on the host. In those days there was no third-party interface allowing you to hook into Docker directly, though it could easily be worked around by bind-mounting a directory that had already been mounted on the host by the storage vendor. In August 2015, Docker released v1.8, which officially introduced the volume plugin to allow third parties to hook up their storage solutions. The installed volume plugin is called by Docker to create/delete/mount/umount/get/list the related volumes, and each volume has a name. That’s it. The framework of the volume plugin remains largely the same to this day.
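For illustration (using a throwaway busybox container purely as an example), the two early forms of -v, plus a named volume of the kind a volume plugin can back, look like this:

    # Anonymous empty volume managed by Docker, mounted at /data in the container
    docker run -v /data busybox

    # Bind mount: expose an existing host directory inside the container
    docker run -v /host/exports:/data busybox

    # Named volume, which a third-party volume plugin/driver can back with external storage
    docker volume create mydata
    docker run -v mydata:/data busybox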

Persistent Volume and Persistent Volume Claim

When you try to figure out how to create persistent storage in Kubernetes, the first two concepts you will likely encounter are Persistent Volume (PV), and Persistent Volume Claim (PVC).

So, what are they? Which one of the two works like the volume in Docker?

In fact, neither works like the volume in Docker. In addition to PV and PVC, there is also a Volume concept in Kubernetes, but it’s not like the one in Docker. We will talk about it later.

After you read a bit more about PV and PVC, you will likely realize that PV is the allocated storage and PVC is the request to use that storage. If you have previous experience with cloud computing or storage, you would likely think of PV as a storage pool and PVC as a volume carved out of that pool.

But no, that’s not what PV and PVC are. In Kubernetes, one PV maps to one PVC, and vice versa; it is strictly an exclusive one-to-one mapping.

I’ve explained this multiple times to people with extensive storage and cloud computing experience. They almost always scratch their heads afterwards and cannot make sense of it.

I couldn’t make sense of it either when I encountered these two concepts for the first time.

Let’s quote the definition of PV and PVC here:

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).

The keywords you need to pay attention to here are by an administrator and by a user.

In short, Kubernetes separates the basic unit of storage into two concepts. A PV is a piece of storage that is supposed to be pre-allocated by an admin, and a PVC is a request for a piece of storage by a user.

That is to say, Kubernetes expects the admin to allocate PVs of various sizes beforehand. When the user creates a PVC to request a piece of storage, Kubernetes will try to match that PVC with a pre-allocated PV. If a match can be found, the PVC is bound to the PV, and the user starts to use that pre-allocated piece of storage.
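As a minimal sketch (the names, NFS server and sizes are invented for illustration), an admin-created PV and a user's PVC that Kubernetes would bind together could look like this:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-data-1
    spec:
      capacity:
        storage: 1Ti
      accessModes:
        - ReadWriteOnce
      nfs:                       # any supported volume plugin works here
        server: 10.0.0.5
        path: /exports/data
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi           # may still end up bound to the whole 1 TiB PV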

This is different from the traditional approach, in which the admin is not responsible for allocating every piece of storage. The admin just needs to give the user permission to access a certain storage pool and decide the user's quota, then leave the user to carve out the needed pieces of storage from that pool.

But in Kubernetes’s design, PV has already been carved out from the storage pool, waiting to be matched with PVC. The user can only request the pre-allocated, fixed-size pieces of storage. This results in two things:

  1. If the user only needs a 1 GiB volume, but the smallest PV available is 1 TiB, the user would have to use that 1 TiB volume. That 1 TiB volume would then be unavailable to any other user, who probably needs much more than 1 GiB. This not only wastes storage space, but also results in situations where some workloads cannot be started due to resource constraints while other workloads hold excessive resources they don’t need.
  2. In order to alleviate the first issue, the administrator either needs to constantly communicate with the user regarding what size/performance of the storage the user needs at the time of the workload creation, or predict the demand and pre-allocate the PV accordingly.

As a result, it’s hard to enforce the separation of allocation (PV) and usage (PVC). In the real world, I don’t see people using PV and PVC in the way they were designed for. Most likely, admins quickly give up the power of creating PVs and delegate it to users. And since PV and PVC are still bound one to one, the existence of PVC becomes unnecessary.

So in my opinion, the use case PV and PVC were designed for is “uncommon”, to say the least.

I hope someone with a deeper knowledge of Kubernetes history can chip in here to help me understand why Kubernetes was designed this way.

Everyday ‘Solution Architectural’ Thinking

As an Architect you can expect, at some point in your career, to be involved in a critical frontline, turbulent project or programme. During such times, you need to rely on technical, political and social skills gained over several years of working in the field of ICT.

Today’s blog (prepared on the London-Coventry train) is a reminder of some of the “basics” that the average Solution Architect must consider when working on complex projects. These ‘system-oriented’ considerations remain at the forefront throughout project engagement, requiring attention during the analysis, design and build stages of system construction and delivery, while maintaining line of sight to the outcome.

As with most things in life, the list presented is obviously subject to the area you operate in, for example, if you are working on a Manufacturing Execution System (MES) solution then your primary focus for the project would be on the real-time supervisory control and data acquisition systems and processes. However, if you are working on a Retail Banking Application then your focus would be biased towards compliance and reporting.

The list presented is not exhaustive and is kept general to provide insight. However, if we examine most ICT projects we often find a common pattern: collecting raw data, transforming it into information, and then allowing digital processes to consume it and generate reports that let the business execute an event or action that adds value to the organisation.

This movement is presented in the diagram below, with thought bubbles representing areas of thought which are further listed in the tables below:

Everyday Solution Architectural Focus Points during a project


The Solution Architecture Life Cycle

Solution Architects, as previously mentioned, are responsible for working with programmes and projects to ensure a solution to a problem is designed, costed, procured, built and delivered into the organisation, which often results in a new process outcome and IT capability.

Solution Architects address a wide spectrum of problems and issues ranging from the simple to the complex and thus require a wide range of skills (technical / business).

The work of the Solution Architect can be broken down into distinct stages and categorised into the following areas:


Solution Architecture Life Cycle

Each layer of the Solution Architect Lifecycle is briefly discussed below. However, it must be noted that the focus at each layer will be aligned to the top layer i.e. the problem/issue.

Identification

Often a problem requires a working group to establish whether something is worth pursuing, e.g. bidding on a project, or to discuss a ‘pattern’ emerging in the technology landscape that requires investigation via the reporting systems, e.g. capacity and performance or security incidents.

Solution Architects are often engaged at this stage to provide advice on possible options for resolving a problem and to assist in triggering the next phase of the activity.

Defining the context of the problem/issue

No project or programme of work in real terms commences without a business case, i.e. a document that captures the reasoning for initiating a project or task, with basic costings and outcomes documented. If the problem is a technical one, the Solution Architect is required to set out (in simple terms) the context of the problem from the systems viewpoint.

Capturing the Requirements

During the requirements capture phase the Solution Architect will spend much of his/her time focusing on the system elements of the requirements and trying to understand the characteristics of the system components.

During this stage, there will be a bias towards the non-functional elements of the system.

During this stage, a Minimum Viable Product can be elicited from stakeholders, i.e. the minimum components and effort required to deliver the functional and non-functional requirements can be sketched out to inform further cost analysis.

It must be noted that the requirements must also encapsulate any legal compliance issues e.g. GDPR requirement and any Enterprise Architectural directives.

Defining the Product Backlog and/or Level 0 Systems Architecture

Once the problem is known, documented and decomposed into a set of clearly defined functional and non-functional requirements, a level 0 systems architecture can be produced to outline a solution.

Where possible reusable components should be highlighted to shorten time to market and increase savings to the project.

At this stage, the outcome should be a level 0 design and, in many cases, a product backlog for the solution.

The level 0 design will help the project determine the cost and effort involved in delivering the required outcome.

Designing the Solution and breaking down the deliverables into sprints

At this stage, a detailed analysis of the Level 0 is undertaken and elaborated further to deliver a detailed design document and the subsequent technical sprints to deliver the project.

Depending on the Solution it may be prudent to produce a low-level design to support the Solution Design.

Options for realising the solution and enacting

I have previously discussed the options available for analysis, from “do nothing” to “build”, but from a cost and feasibility point of view the selected option should leverage existing relationships/services and offer the best value for money.

Delivering Solution into production

Developing, procuring  or modifying a system requires deployment into a production environment and thus the Solution Architect must be capable of defining the environments (test, prod, pre-prod) for the route to live. Often this will involve working with the Service Architects to design the Service and the operational elements (often extrapolated from the NFRs) of the system.

If we were to take all the elements above and plot the time that the Solution Architect would be involved in the project, we could produce a graph like the one below:

Solution Architect effort across the life cycle

In summary, the Solution Architect is an important role that requires skills which evolve with each engagement, and it has a part to play from problem realisation through to delivery of the solution into service.

This article is from https://dalbanger.wordpress.com/2017/05/07/the-solution-architecture-life-cycle/

AccessToken Vs ID Token Vs Refresh Token, What? Why? When?

Introduction

This article demonstrates the different types of tokens in OpenID Connect. By the end of this article, you will have a clear understanding of the following points:
  1. About JSON Web Tokens (JWT)
  2. What is an Access Token?
  3. Example of an Access Token
  4. Why do we need an Access Token?
  5. What is an ID Token?
  6. Example of an ID Token
  7. Why do we need an ID Token?
  8. What is a Refresh Token?
  9. Example of a Refresh Token
  10. Why do we need a Refresh Token?

About JSON Web Tokens (JWT)

JWTs, i.e. JSON Web Tokens, are an important piece in ensuring trust and security in your application. A JWT allows claims, such as user data, to be represented in a secure manner.
A JWT is a sequence of base64url-encoded values separated by dot characters, in the format “Header.Payload.Signature”. The header holds metadata about the token, the payload holds the claims about the entity (typically the user), and the signature is appended to form the signed token.
The signed token is generated by combining the encoded header and payload and signing them with an algorithm such as HMAC SHA-256. The signing key is held by the server, so the server can verify existing tokens as well as sign new ones.
A JWT can be used as an opaque identifier, or it can be inspected for additional information, such as the identity attributes it represents as claims.
A sample JWT is therefore just three base64url strings joined by dots.
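To make this concrete, the following minimal C# sketch issues and prints an HMAC SHA-256 signed JWT using the System.IdentityModel.Tokens.Jwt package (the key, issuer, audience and claim values are invented for the example):

    using System;
    using System.IdentityModel.Tokens.Jwt;
    using System.Security.Claims;
    using System.Text;
    using Microsoft.IdentityModel.Tokens;

    class JwtDemo
    {
        static void Main()
        {
            // Symmetric signing key held only by the server (illustrative value).
            var key = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("a-very-long-demo-signing-key-1234567890"));
            var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

            // The handler base64url-encodes the header and payload (claims)
            // and appends the HMAC SHA-256 signature.
            var token = new JwtSecurityToken(
                issuer: "https://demo-issuer.example.com",
                audience: "demo-api",
                claims: new[] { new Claim(JwtRegisteredClaimNames.Sub, "user-123") },
                expires: DateTime.UtcNow.AddMinutes(15),
                signingCredentials: credentials);

            string jwt = new JwtSecurityTokenHandler().WriteToken(token);
            Console.WriteLine(jwt); // prints header.payload.signature
        }
    }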

What is an Access Token?

Access tokens are credentials used to access protected resources.
Access tokens are used as bearer tokens. A bearer token means that the bearer (who holds the access token) can access authorized resources without further identification.
Because of this, it is important that bearer tokens be protected.
These tokens usually have a short lifespan for security purposes. When a token expires, the user must authenticate again to get a new access token, which limits the exposure that comes with it being a bearer token.
An access token must never be used for authentication. Access tokens cannot tell whether the user has authenticated. The only user information the access token carries is the user ID, located in the sub claim.
The application receives an access token after a user successfully authenticates and authorizes access. It is usually in JWT format but does not have to be.
The application should treat access tokens as opaque strings since they are meant for APIs. Your application should not attempt to decode them or expect to receive tokens in a particular format.
This token does not contain any information about the user besides their ID (“sub”). It only contains authorization information about which actions the application is allowed to perform at the API (“scope”). This is what makes it useful for securing an API, but not for authenticating a user.
An access token is put in the Authorization header of your request, usually in the form Bearer <access_token>, so that the API you are calling can verify it and grant you access.
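As a small illustration (the API URL is invented), attaching an access token to a request with HttpClient could look like this:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ApiClientDemo
    {
        static async Task Main()
        {
            string accessToken = "<access_token>"; // obtained from the token endpoint

            using var client = new HttpClient();
            // The API treats this purely as a bearer credential, not as proof of identity.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var response = await client.GetAsync("https://api.example.com/orders");
            Console.WriteLine(response.StatusCode);
        }
    }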
Example of an Access Token

Here is a sample response from the token endpoint. The response includes the ID token and the access token; your application can use the access token to make API requests on behalf of the user.


Service-to-service communication

The article is from Microsoft.

Moving on from the front-end client, we now address how back-end microservices communicate with each other.

When constructing a cloud-native application, you’ll want to be sensitive to how back-end services communicate with each other. Ideally, the less inter-service communication, the better. However, avoidance isn’t always possible as back-end services often rely on one another to complete an operation.

There are several widely accepted approaches to implementing cross-service communication. The type of communication interaction will often determine the best approach.

Consider the following interaction types:

  • Query – when a calling microservice requires a response from a called microservice, such as, “Hey, give me the buyer information for a given customer Id.”
  • Command – when the calling microservice needs another microservice to execute an action but doesn’t require a response, such as, “Hey, just ship this order.”
  • Event – when a microservice, called the publisher, raises an event that state has changed or an action has occurred. Other microservices, called subscribers, who are interested, can react to the event appropriately. The publisher and the subscribers aren’t aware of each other.

Microservice systems typically use a combination of these interaction types when executing operations that require cross-service interaction. Let’s take a close look at each and how you might implement them.

Queries

Many times, one microservice might need to query another, requiring an immediate response to complete an operation. A shopping basket microservice may need product information and a price to add an item to its basket. There are many approaches for implementing query operations.

Request/Response Messaging

One option for implementing this scenario is for the calling back-end microservice to make direct HTTP requests to the microservices it needs to query, shown in Figure 4-8.


Figure 4-8. Direct HTTP communication
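For instance, a minimal sketch of such a direct query, where a basket service calls a catalog service over HTTP (the service URL and response type are invented for illustration), could look like this:

    using System;
    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    // Hypothetical DTO returned by the catalog microservice.
    public record ProductInfo(string Id, string Name, decimal Price);

    public class BasketService
    {
        private readonly HttpClient _catalogClient =
            new HttpClient { BaseAddress = new Uri("http://catalog-service") };

        // The caller awaits until the catalog service answers - exactly the
        // synchronous coupling discussed below.
        public async Task<ProductInfo?> GetProductAsync(string productId) =>
            await _catalogClient.GetFromJsonAsync<ProductInfo>($"/api/products/{productId}");
    }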

While direct HTTP calls between microservices are relatively simple to implement, care should be taken to minimize this practice. To start, these calls are always synchronous and will block the operation until a result is returned or the request times out. What were once self-contained, independent services, able to evolve independently and deploy frequently, now become coupled to each other. As coupling among microservices increases, their architectural benefits diminish.

Executing an infrequent request that makes a single direct HTTP call to another microservice might be acceptable for some systems. However, high-volume calls that invoke direct HTTP calls to multiple microservices aren’t advisable. They can increase latency and negatively impact the performance, scalability, and availability of your system. Even worse, a long series of direct HTTP communication can lead to deep and complex chains of synchronous microservices calls, shown in Figure 4-9:

Figure 4-9. Chaining HTTP queries

Front-end client communication

The article is from Microsoft.

In a cloud-native system, front-end clients (mobile, web, and desktop applications) require a communication channel to interact with independent back-end microservices.

What are the options?

To keep things simple, a front-end client could directly communicate with the back-end microservices, shown in Figure 4-2.


Figure 4-2. Direct client to service communication

With this approach, each microservice has a public endpoint that is accessible by front-end clients. In a production environment, you’d place a load balancer in front of the microservices, routing traffic proportionately.

While simple to implement, direct client communication would be acceptable only for simple microservice applications. This pattern tightly couples front-end clients to core back-end services, opening the door for a number of problems, including:

  • Client susceptibility to back-end service refactoring.
  • A wider attack surface as core back-end services are directly exposed.
  • Duplication of cross-cutting concerns across each microservice.
  • Overly complex client code – clients must keep track of multiple endpoints and handle failures in a resilient way.

Instead, a widely accepted cloud design pattern is to implement an API Gateway Service between the front-end applications and back-end services. The pattern is shown in Figure 4-3.

Figure 4-3. API Gateway Pattern

Cloud Native

The article is from Microsoft.

Stop what you’re doing and text ten of your colleagues. Ask them to define the term “Cloud Native”. Good chance you’ll get ten different answers.

Cloud native is all about changing the way you think about constructing critical business systems.

Cloud-native systems are designed to embrace rapid change, large scale, and resilience.

The Cloud Native Computing Foundation provides an official definition:

Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

Applications have become increasingly complex with users demanding more and more. Users expect rapid responsiveness, innovative features, and zero downtime. Performance problems, recurring errors, and the inability to move fast are no longer acceptable. They’ll easily move to your competitor.

Cloud native is about speed and agility. Business systems are evolving from enabling business capabilities to being weapons of strategic transformation that accelerate business velocity and growth. It’s imperative to get ideas to market immediately.

Here are some companies who have implemented these techniques. Think about the speed, agility, and scalability they’ve achieved.

Table 1. Company experience
  • Netflix: has 600+ services in production; deploys 100 times per day.
  • Uber: has 1,000+ services in production; deploys several thousand times each week.
  • WeChat: has 3,000+ services in production; deploys 1,000 times a day.

As you can see, Netflix, Uber, and WeChat expose systems that consist of hundreds of independent microservices. This architectural style enables them to rapidly respond to market conditions. They can instantaneously update small areas of a live, complex application, and individually scale those areas as needed.

The speed and agility of cloud native come about from a number of factors. Foremost is cloud infrastructure. Five additional foundational pillars shown in Figure 1-3 also provide the bedrock for cloud-native systems.

Figure 1-3. Cloud-native foundational pillars

Bridge Pattern - Structural Patterns, Part 5

  1. Preface to the Design Patterns series - written to my future self
  2. Factory Method Pattern - Creational Patterns, Part 1
  3. Abstract Factory Pattern - Creational Patterns, Part 2
  4. Singleton Pattern - Creational Patterns, Part 3
  5. Builder Pattern - Creational Patterns, Part 4
  6. Prototype Pattern - Creational Patterns, Part 5
  7. Decorator Pattern - Structural Patterns, Part 1
  8. Composite Pattern - Structural Patterns, Part 2
  9. Adapter Pattern - Structural Patterns, Part 3
  10. Proxy Pattern - Structural Patterns, Part 4
  11. Bridge Pattern - Structural Patterns, Part 5

I recently started a new job. It is a brand-new beginning and everything is still a work in progress, so here is a little pep talk to myself!

The design patterns series has not been updated for a long time. Without further ado, let's get started on today's topic: the Bridge pattern.

Definition:

What is the Bridge pattern? We know that there is always a strong dependency between an implementation and its abstraction. The purpose of the Bridge pattern is to decouple the implementation from the abstraction, meaning that the two no longer have a static dependency on each other; each can vary independently without affecting the other.
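To make this concrete, here is a minimal C# sketch of the pattern (the shape and renderer names are invented for illustration): the abstraction holds a reference to the implementation interface, so either hierarchy can be extended without touching the other.

    using System;

    // Implementation side: how something is drawn.
    public interface IRenderer
    {
        void RenderCircle(double radius);
    }

    public class VectorRenderer : IRenderer
    {
        public void RenderCircle(double radius) =>
            Console.WriteLine($"Drawing a circle of radius {radius} with vectors");
    }

    public class RasterRenderer : IRenderer
    {
        public void RenderCircle(double radius) =>
            Console.WriteLine($"Drawing pixels for a circle of radius {radius}");
    }

    // Abstraction side: what is drawn. It holds the bridge to the implementation.
    public abstract class Shape
    {
        protected readonly IRenderer Renderer;
        protected Shape(IRenderer renderer) => Renderer = renderer;
        public abstract void Draw();
    }

    public class Circle : Shape
    {
        private readonly double _radius;
        public Circle(IRenderer renderer, double radius) : base(renderer) => _radius = radius;
        public override void Draw() => Renderer.RenderCircle(_radius);
    }

    // Usage: renderers and shapes now vary independently of each other.
    // new Circle(new VectorRenderer(), 5).Draw();
    // new Circle(new RasterRenderer(), 5).Draw();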

UML:

Bridge Pattern UML diagram