Improve Operations, Performance, and Business Metrics with Application Monitoring Solutions

Enhanced user experience is one of the top goals for every SaaS provider. How do you ensure that your customers are experiencing the value you intended your applications to deliver? 

“Customer experience is the next competitive battleground.”

– Jerry Gregoire, founder and Chairman of Redbird Flight Simulations, Inc.

Having real-time, semi-real-time, and non-real-time metrics at your fingertips helps you stay ahead of expected and unexpected challenges across your service's performance, business, and operations.

Why do startups need to monitor their well-tested applications?

Software used to be delivered (or “shipped”) to customers on a CD, and that was the extent of it; the vendor's job was done. Since the advent of SaaS and cloud services, however, software developers have visibility into how the delivered application behaves: how customers are using it, which features are popular with users, which are not, and much more.

This allows SaaS providers to leave the hardware mindset behind; they no longer have to ‘build it to never break’. Instead, application monitoring lets SaaS providers make customer satisfaction an iterative process. Why? Because application monitoring gives you not only the performance of your product in the customer environment but also oversight of your operations and valuable business insights that you can leverage to grow revenue.

What are the different categories of metrics used to monitor such applications?

Broadly, the application monitoring metrics are divided into three categories.

The first and most important one is operational metrics. Generally, it covers the health of your services, all the related underlying microservices, and the interaction of the service in its environment. Operational metrics are usually real-time metrics that notify SaaS providers as soon as an application or any feature in the application ceases to function.  

The second category is business metrics, which determine whether your service is delivering the value you designed it to deliver. Business metrics can be real-time, but they usually are not; they are measured over weeks and months to identify trends.

The third type is performance metrics. An application may be running and providing the value it was meant to, yet take a long time to load, or its TCP handshake may take longer than expected; such measurements are captured by the application's performance metrics. Performance metrics are semi-real-time.

What are some good examples of operational metrics?

The basic metrics that SaaS providers can use for monitoring applications, such as uptime, CPU, memory, and I/O usage, are offered by cloud providers for free. These metrics also require no instrumentation in your code.

In other cases, metrics are gathered through instrumentation installed in your application. If the code crashes, this instrumentation generates an alert for the undesired event across the affected microservices.

Threshold-based metrics, which generate a notification and/or an alarm when the threshold is breached, are also used in operations, depending on your services.
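The mechanics of a threshold-based alarm are simple enough to sketch. The snippet below is a minimal, self-contained illustration in Python; the 80% CPU threshold and the three-datapoint window are made-up values for the example, not a recommendation. Managed services such as CloudWatch implement the same idea with alarm periods and evaluation windows.

```python
from collections import deque

def make_threshold_monitor(threshold, breaches_to_alert=3):
    """Return a function that ingests datapoints and reports True once
    `breaches_to_alert` consecutive values exceed `threshold`."""
    recent = deque(maxlen=breaches_to_alert)

    def ingest(value):
        recent.append(value > threshold)
        return len(recent) == breaches_to_alert and all(recent)

    return ingest

# Alarm on CPU utilization above 80% for 3 consecutive datapoints.
cpu_alarm = make_threshold_monitor(threshold=80.0, breaches_to_alert=3)
for datapoint in [70, 85, 90, 95]:
    if cpu_alarm(datapoint):
        print(f"ALERT: CPU sustained above 80% (latest: {datapoint}%)")
# → ALERT: CPU sustained above 80% (latest: 95%)
```

Requiring several consecutive breaches, rather than alerting on a single spike, is what keeps a threshold alarm from paging you on transient noise.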

Operational metrics are layered: you start small and add more sophisticated metrics on top of that foundation.

What are some top-tier performance metrics for application monitoring?

Page load time is a good example of how you measure the performance of your service. Similarly, the time it takes a customer to submit a form hosted on your website is another. In a ride-hailing app, the time it takes to enter an address, and then the delay until the driver receives the notification, are performance measurements too; what you measure varies with your service.

For more sophisticated monitoring, you gradually build on top of the simpler metrics: upon notification of an error, for instance, you observe performance at a more granular level, such as TCP handshake time and SSL negotiation time.
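Because a handful of slow requests can hide behind a healthy average, performance metrics such as page load time are usually tracked as percentiles. Below is a minimal nearest-rank percentile sketch; the sample latencies are invented for illustration.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that at
    least p% of all observations fall at or below it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical page load times in milliseconds.
load_times_ms = [120, 135, 140, 150, 160, 180, 210, 260, 400, 1200]
for p in (50, 95, 99):
    print(f"p{p} page load: {percentile(load_times_ms, p)} ms")
# → p50 page load: 160 ms
# → p95 page load: 1200 ms
# → p99 page load: 1200 ms
```

Note how the p95/p99 surface the one pathological 1200 ms load that the median completely hides, which is exactly why tail percentiles matter for user-facing performance.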

Does microservices architecture pose more complexity compared to a monolithic architecture?

Microservices may be harder to build but are easier to monitor, because a metric that is reporting an error localizes the issue to that microservice alone.

Monolithic applications are easier to build but harder to monitor, simply because if something breaks, debugging and addressing the problem can be very time-consuming and will affect every aspect of your business.

What are your recommendations regarding an alert system?

Amazon CloudWatch, a monitoring and observability solution for applications and infrastructure, is a very powerful system. Multiple alerting channels, such as email notifications, SNS, and SQS, are built into it. It also offers integrations with ticketing and communication systems, on-call tools such as PagerDuty, and much more.

Other than that, Google Cloud Monitoring, Datadog, and New Relic are a few SaaS providers of monitoring solutions.

How do monitoring solutions help operations teams with correlation, triage, and root cause diagnosis?

As a SaaS provider, you ought to know about a problem in your service or application before, or at the latest when, a customer complains. One application is supported by multiple microservices, and identifying a performance issue and knowing where to find it before you can resolve it is only possible through application monitoring metrics.

Once you have identified the problem and the source of the performance bottleneck, you can start fixing it to bring the application back to its original functional state. That is how correlation, triage, and root cause diagnosis help you identify problems, remediate them, and keep your service functional.
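One way to picture correlation and triage: cluster error events from different microservices by time, and treat the service that failed first in each cluster as the first suspect for root cause. The sketch below is a simplified illustration with invented service names, not a substitute for a real monitoring backend.

```python
from datetime import datetime, timedelta

def correlate_errors(events, window=timedelta(seconds=30)):
    """Group error events occurring within `window` of each other and
    surface the service that failed first in each group."""
    events = sorted(events, key=lambda e: e["time"])
    groups, current = [], []
    for ev in events:
        if current and ev["time"] - current[-1]["time"] > window:
            groups.append(current)
            current = []
        current.append(ev)
    if current:
        groups.append(current)
    return [
        {"first_suspect": g[0]["service"],
         "services": sorted({e["service"] for e in g})}
        for g in groups
    ]

t0 = datetime(2022, 1, 1, 12, 0, 0)
events = [
    {"service": "payments-db",  "time": t0},
    {"service": "payments-api", "time": t0 + timedelta(seconds=5)},
    {"service": "checkout-ui",  "time": t0 + timedelta(seconds=12)},
    {"service": "search",       "time": t0 + timedelta(minutes=10)},
]
print(correlate_errors(events))
```

Here the database error precedes the API and UI errors by seconds, so the cluster points at `payments-db` as the likely root cause, while the unrelated `search` error lands in its own group.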

Are there monitoring solutions where you are proactively looking and might foresee an event?

As a SaaS-based startup, your software stack is missing a key monitoring capability if it cannot alert you hours ahead of your customers. A customer informing you about an error should be the last resort. Usually, that last resort happens when a SaaS provider missed or misjudged a metric, deemed it irrelevant, and then wasn't notified when it broke in the customer environment. SaaS providers need to choose their monitoring stack carefully to avoid this oversight: monitoring should be proactive rather than reactive.

How much should the cost be for any application monitoring solution?

CloudWatch, on-call rotation systems, and other tooling are not super expensive, though the cost grows as your application scales and requires more and more components. The human resources may be expensive; however, the actual dollar cost of setting up monitoring systems is generally pretty low.

How do we monitor API endpoints?

For APIs, I always suggest canaries, simply because they are the easiest, fastest, and most painless way to monitor your API endpoints. Every public-facing API should have multiple canaries continually testing it. Numerous tools are available in the market, from cloud providers and third parties, for testing API endpoints in terms of security, functionality, and performance.

CloudWatch offers Synthetics, which lets you set up a synthetic stack in another region from where you can continuously test your public API endpoints.
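Stripped of the managed tooling, a canary is just a scheduled check that records failures as a metric. The toy sketch below shows the core loop; the check is a stand-in for a real HTTP call to your endpoint (e.g. verifying that a health URL returns 200), and in production the failure count would be emitted to your monitoring backend rather than returned.

```python
import time

def run_canary(check, runs=3, interval_s=0.0):
    """Execute `check` (which returns True on success) `runs` times and
    report the failure count; exceptions also count as failures."""
    failures = 0
    for _ in range(runs):
        try:
            if not check():
                failures += 1
        except Exception:
            failures += 1
        time.sleep(interval_s)
    return failures

# Stand-in check: simulate one intermittent failure out of three runs.
responses = iter([True, False, True])
print("failures:", run_canary(lambda: next(responses)))
# → failures: 1
```

A real canary additionally alarms on the failure metric, which is how it warns you about a broken endpoint before a customer does.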

How does a startup founder go about setting up a monitoring system and which tools are the best ones for it?

As a startup founder, it is easy to fall for the idea that more data means more results. I would caution against it and instead suggest starting small. For instance, start with CPU and memory, focus on mechanisms and processes, and work out where things might break. Install solutions for those points. Start with a few metrics, preferably ones that fit on a single screen, and then focus on your mechanisms, your backend systems, your paging, and a culture of operational excellence rather than the metrics themselves. Once you have achieved that without oversights, evolve from there.

What is the difference between agent-based and agentless monitoring?

In agentless monitoring, no agent needs to be installed in the system for you to monitor it; existing systems emit those metrics automatically. Agent-based monitoring requires installing an agent, for instance by integrating with an SDK; that agent collects the desired metrics and emits them to your application backend. I recommend starting agentless, but it can only take you so far; the agent-based approach provides much deeper insights. Take application performance monitoring as an example: an agent can passively monitor DOM objects and measure page load time via a short script, letting you observe which part of the TCP handshake is taking longer, and so forth.

This depends on how deep you want to observe the operations or performance and also on the feasibility of inserting an agent-based solution. 

What good vs. bad monitoring practices do you see in the market and what does the future look like for Monitoring Solutions?

For now, there are two strategies in the market: the new way of shipping software with monitoring and observability services versus the old way of shipping systems that never break. For me, technology is easy; people are the hardest part of the puzzle. I have noticed that startups get stuck in the legacy practice of having no visibility into their customer environments and their service performance.

And as far as the future is concerned, monitoring your application is the future: measuring performance metrics, supervising the operational metrics of your application, and observing business metrics to see which features are providing value and which are not. All these metrics provide you with significant data for making agile decisions for your startup. All successful companies, today and in the future, will have these metrics and will be able to look each other in the eye and say: we made a mistake and we're going to change it, because the data tells us so. I think such courageous teams are winning today and will continue to win in the future.

To Watch the Complete Episode, Please Click on This YouTube Link:


Supercharge your Product Development with Infrastructure as Code

Infrastructure as Code (IaC) is a dynamic and automated foundation for DevOps. But, as a startup founder, how much do you really know about IaC? 

We invited Abdullah Shah, Senior Software Engineer working with the Cloud Security Engineering Team at Salesforce, to enlighten us about IaC. Here is how it went down:

Why is there a need for IaC? What trouble will companies face if they don’t embrace IaC?

Without IaC, IT infrastructure is configured, operated, and maintained manually. Historically, applications were developed on legacy IT infrastructure, i.e. servers, storage, and compute, all provided as bare-metal hardware in an on-prem environment. Configuration, operation, and maintenance were performed manually, which meant high rates of human error, delays, and demand for a large headcount on the IT team. In addition, these manual processes lacked standardized procedures and effective documentation, if any existed at all. Collectively, this resulted not only in environment drift but in an overall inconsistent environment for application development, and it created even more challenges during scale-ups.

Explain to us the concept of IaC, and why should the companies work with it? 

The irregularities we discussed in the absence of IaC necessitate a more sophisticated way to provision infrastructure for product development. Infrastructure as Code (IaC) is that very revolution.

IaC, in contrast to the legacy manual model, is a descriptive model that allows the configuration, provisioning, and management of infrastructure using machine-readable files.

Infrastructure as Code

With automated infrastructure, configuration, operations, and maintenance are performed using scripts. As a result, IaC gives you consistent performance, accountable monitoring, agile operations, flexible scale-ups, transparent audits, software security, and a much faster development cycle. Companies that have embraced IaC benefit from reduced operations costs and faster time to market for new products, services, and features. All in all: higher revenue and more satisfied customers.

As a startup founder, what steps do I need to take to embrace IaC?

I would advise promptly embracing IaC or at least outlining the roadmap to immediately focus on embracing it. 

The first step is to evaluate the infrastructure requirements for the products and/or services you offer. Secondly, with multiple options available, make a categorical decision on your tech stack according to your product. 

Startups usually want to build and push products to market quickly, believing they are merely prototypes. However, I would encourage you to create a solid foundation with the right IaC principles and to implement IaC from the ground up. With a strong footing, scaling beyond the first 2-3 servers becomes streamlined and efficient.

Is IaC required to have DevOps in a startup? How are they related?

DevOps and IaC go hand in hand; one would not exist without the other. Although all companies apply DevOps principles to varying degrees, the most popular is the shift-left approach, which is synonymous with the service ownership model. The idea is that developers in IT teams do not work in silos but collaboratively, creating a holistic view of the entire application development lifecycle. In this spirit, developers are responsible not only for application development but also for creating the right infrastructure for operating and deploying the application. The responsibility for the code has been fanned out among the IT team, testing and monitoring roles have shifted left to the developers, and all of this is enabled by IaC.

Do I need to test and monitor IaC?

There is no substitute for testing and monitoring IaC; if anything, it demands even more stringent testing. Infrastructure can be automated flawlessly by using correct IaC templates and avoiding misconfigurations. An array of testing tools is available to choose from, but the fundamental point, that testing is critical for IaC, cannot be overstated.

With QA functions disappearing in pure DevOps culture, who would carry out these stringent testing and monitoring?

In pure DevOps, CI/CD has automated IT testing and operations, which in turn accelerates application deployment exponentially. This results in continuous updates to your application and infrastructure. If you don't have automated testing, you are in a heap of trouble: at this rate of deployment, humans cannot keep up, and companies must implement automated testing strategies throughout the application development supply chain.

How would you address the industry's IaC-related fear of automation?

The idea is simple: if you anticipate failure, you can prepare for it and then mitigate it. The preparation you need is a robust testing strategy, and it is equally important to have a feedback control loop to continuously audit and improve that strategy. The ultimate goal is an environment that doesn't miss any build failures. Failures caught and addressed in the next loop allow streamlined product deployment moving forward.

Explain the difference between declarative and imperative approaches for us.

The difference is that declarative is a functional approach and imperative is a procedural one. A declarative IaC approach answers the question ‘what is the preferred state?’, while an imperative IaC approach answers ‘how is the preferred state achieved?’. Since it is critical to keep your infrastructure in the desired state, a declarative IaC approach is recommended. An imperative approach relies on control loops and conditional statements, which can become a complex endeavor.
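The contrast is easy to see in code. In the toy sketch below, the imperative version spells out the steps to reach a server count, while the declarative version states the desired count and lets a reconciler converge on it; the server names and list-based "fleet" are purely illustrative.

```python
# Imperative: spell out the steps to take.
def scale_up_imperative(servers, target):
    while len(servers) < target:
        servers.append(f"server-{len(servers) + 1}")
    return servers

# Declarative: state the goal; a reconciler works out the steps,
# converging the fleet toward the desired state in either direction.
def reconcile(current, desired_count):
    if len(current) < desired_count:
        current += [f"server-{i + 1}"
                    for i in range(len(current), desired_count)]
    elif len(current) > desired_count:
        del current[desired_count:]
    return current

fleet = ["server-1"]
print(reconcile(fleet, 3))   # grow toward the desired state
print(reconcile(fleet, 2))   # shrink toward the desired state
# → ['server-1', 'server-2', 'server-3']
# → ['server-1', 'server-2']
```

The declarative version is idempotent: running `reconcile` again with the same desired count changes nothing, which is exactly the property that makes desired-state IaC tools safe to re-apply.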

Some supporting tools create an imperative layer over the declarative model to provide an abstraction. Pulumi, for instance, is one such tool: you write imperative code that produces a declarative model. Amazon CDK and Terraform are other examples of tools that aim to provide the best of both approaches.

Which of these approaches in your opinion can help the companies with their tech debt?

Traditional practices slow down the application development cycle and can lead to technical debt. In unpredictable cases, e.g. an immediate customer requirement, badly written code, or a new feature request, the right automated testing strategies are your only way to avoid incurring technical debt. That is exactly what IaC promises: it creates guard rails around your processes that reduce technical debt.

Mutable and Immutable infrastructures, which one would you recommend? 

During application development, changes are inevitable. Your infrastructure will need to be scaled up or down, updated, and/or have patches applied. If the infrastructure can be changed post-deployment, it is mutable; if not, it is immutable. With immutable infrastructure, changes are rolled out on a new replica machine before the old infrastructure is taken down. There are multiple ways to go about it; however, when in doubt, go with mutable infrastructure.
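The immutable pattern can be sketched as a provision-verify-cut-over sequence. The toy model below uses plain dictionaries to stand in for machines; it is an illustration of the ordering, not a deployment tool.

```python
def immutable_rollout(live, build_new):
    """Immutable pattern: provision a fresh replacement, verify it,
    switch traffic over, and only then retire the old instance."""
    candidate = build_new()
    if not candidate["healthy"]:
        return live                  # keep serving from the old stack
    candidate["serving"] = True      # cut traffic over to the replica
    live["serving"] = False          # old stack retired after cutover
    return candidate

old = {"version": "v1", "healthy": True, "serving": True}
new = immutable_rollout(
    old, lambda: {"version": "v2", "healthy": True, "serving": False})
print(new["version"], new["serving"], old["serving"])
# → v2 True False
```

Note the safety property: if the replica fails its health check, the old stack keeps serving untouched, which is why immutable rollouts make rollback trivial.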

In terms of tech stack, what are the best tools to implement IaC?

There is a spectrum of choices available, as discussed earlier: Amazon CDK, CDKTF, CloudFormation, Terraform, and Pulumi are all tools used to implement IaC. I suggest democratizing the decision among developers, SREs, and stakeholders, since the tech stack is not only meant for IaC; it orchestrates the entire application development pipeline. For developers, you have version control using Git; for operations, CI/CD, AWS pipelines, and GitOps; for build services, there are custom Jenkins tools. Argo CD is a popular tool that operates specifically with Kubernetes-based deployments, while Spinnaker allows you to deploy custom IaC technologies. Ultimately, the decision depends on your use cases and what is required to implement them.

My recommendation for IaC is Terraform: they offer one of the best tools, with an early-mover advantage, a vibrant community, simplified syntax, and well-written documentation.

Does IaC help with the provisioning? 

For sure. IaC, much like software development, is version-controlled in a centralized repository. Any update or change in features will be validated, tested, built, and pushed to the required registries automatically. These automated processes form a holistic development pipeline; all these tools come together to facilitate everything from integration to deployment, and provisioning is definitely one aspect of that whole picture.

IaC on the cloud vs. IaC on-prem: your thoughts?

On-prem infrastructures are heterogeneous when it comes to provisioning identity services, secrets services, and network security. Only limited automation can be performed in on-prem environments; most services have to be custom-built and do not scale. The cloud, on the other hand, standardizes service provisioning, and infrastructure resources are more flexible to scale up or down as required. Documented best practices, ready-made abstractions, and a repeatable, predictable environment are some of the factors that put IaC on the cloud in a league of its own, offering more value.

What IaC profiles should I look for, as a startup founder?

Broadly speaking, for pure DevOps, developers and Site Reliability Engineers (SREs) with a service-ownership mindset are increasingly in demand, in Silicon Valley and across the world.

What are the best practices for IaC in the industry?

To build an application infrastructure, especially in the agile world of IaC, best practice requires a set of foundational services at the periphery as well. For your company, that means acquiring the full package of foundational services: compute, storage, database, networking, security, all of it.

What does the future hold for IaC?

IaC has already saved companies a lot of money. It has sped up software development from integration to deployment, and it ultimately delivers value to customers, helping you create and serve those customers in a fast, agile, and repeatable way.

In terms of the future, we already know that manual infrastructure provisioning cannot scale or keep pace with IaC; lost customers and technical debt are the natural outcomes. To enable fast, repeatable deployments and product iterations, IaC is the inevitable future, and I strongly believe that companies are implementing, and will continue to implement, robust, fully orchestrated infrastructure and pipelines at the heart of their businesses.

To Watch the Complete Episode, Please Click on This YouTube Link:


How to Get Started With DevOps: Queries of a Startup Founder

Cloud DevOps

As a startup founder, why should you know about DevOps? To answer that question, you need to avoid the most common mistakes similar ventures make when they start. Financial concerns aside, most startups experience challenges with timely product releases, team collaboration, process establishment, product quality, and customer satisfaction, to name a few. What if we were to tell you that all notable corporations are using one key concept to address the above challenges, and that you can get a head start by instilling that concept into your startup from conception? Yes, you guessed it right! That key concept, being heavily adopted, is DevOps.

To answer all the above questions, we invited Ali Khayam to our Xgrid Talk series. Dr. Ali Khayam is currently working as a GM-SDN and Internet Services at Amazon Web Services (AWS). Being an expert on the subject, Dr. Ali sat down with us to help us navigate the concept of DevOps. 

What Is DevOps?

DevOps upends the split-ownership model in which software is built, tested, and operated by separate teams. In DevOps, you build it, you run it. The concept had been in practice for a while, but it became mainstream with the introduction of cloud technology: since servers and networking infrastructure are no longer managed on-site by the developers, it is reasonable for the developer team to run the software in production as well.

What Is CI/CD?

CI stands for Continuous Integration; CD stands for Continuous Deployment. CI/CD is the tooling that enables the DevOps philosophy to be implemented. Continuous Integration is an automated build-and-test process, a huge improvement over manual testing, which was a tedious procedure with enormous test plans and spreadsheets. Continuous Integration has eliminated the need for dedicated quality assurance teams: with CI, the software developers themselves create automated tests for each check-in. This allows each commit to be tested before it can become a cause of regression in your existing software. If a commit does not work as expected or a package fails to build, CI will prohibit merging that commit into the repository. CD then deploys the software without human intervention.
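The gate at the heart of CI can be reduced to a few lines: run every automated test against a commit and allow the merge only if all of them pass. The toy illustration below uses invented test names and commit fields; a real CI system does the same thing with actual build and test jobs.

```python
def ci_gate(commit, test_suite):
    """Continuous-integration gate: run every automated test against
    the commit; the merge is allowed only if all of them pass."""
    results = {name: test(commit) for name, test in test_suite.items()}
    return all(results.values()), results

# Hypothetical checks a developer might register for each check-in.
tests = {
    "unit":       lambda c: c["compiles"],
    "regression": lambda c: "breaking_change" not in c["diff"],
}

ok, report = ci_gate({"compiles": True, "diff": ["refactor"]}, tests)
print("merge allowed" if ok else "merge blocked", report)
```

In a full CI/CD pipeline, a passing gate would then hand the commit to CD, which deploys it without human intervention.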

What Are the Benefits of DevOps?

Rendering separate quality assurance and deployment teams unnecessary is the most cost-effective benefit for an organization. DevOps also reduces time to market and gives the development team more ownership, since they oversee the end-to-end development process.

What Steps Does a Startup Have to Take To Implement DevOps?

If a startup has not yet been using the DevOps methodology, the team needs to be brought on board. First, they need to understand that their role is no longer confined to software development and unit testing; they need to build functional and system tests as well.

Automating the CI/CD pipeline requires the organization to choose from an array of available tools. Your use case and where your workload is hosted play a vital role in selecting a tool. For example, most public clouds offer built-in CI/CD options. For data centers, you have Jenkins among other options. On a SaaS platform, you have CircleCI, Travis CI, etc. for CI, and Argo CD, Flux CD, etc. for CD.

It is a common misconception that DevOps cannot be adopted until a monolithic architecture is decomposed into microservices. If test frameworks are not available, it is worthwhile for startups to hire developers who can build the testing frameworks needed to automate the monolith.

Once the right tools for your CI/CD pipeline have been identified and implemented, the next step is to iterate the process to make the product better and/or scalable for your application architecture.

What Employment Profiles fit a DevOps Team?

In order to understand the role of a DevOps team, we need to understand the software stack and its operations. Applications run on infrastructure, and the DevOps software stack needs a host for that infrastructure, which provides the orchestration layer on which developers write application software.

This is where DevOps engineers come into play; ideally, all developers should be DevOps engineers who build, test, and deploy the application. Apart from DevOps engineers, other profiles are Site Reliability Engineers (SREs) and infrastructure engineers.

Infrastructure engineers manage the IaC. Developers write the application and their own test cases; responsibility for the code's performance resides with the developer. The SRE role has cross-cutting concerns: making sure that the entire infrastructure and all the applications are working as expected.

What Is the Path to Transition From Traditional Development to DevOps?

If you’re just starting, the fastest and easiest way to get familiar with DevOps is to use a cloud-based solution or a SaaS product. The models built into these platforms are portable and comparable to legacy infrastructure.

What Key Performance Indicators (KPIs) Do You Have in Place for DevOps?

The KPIs remain the same for traditional and DevOps application development, e.g. transactions per second, latency of API calls, and so on. The DevOps methodology changes the way development is done, shifting responsibility from multiple teams to one team, so the KPIs do not vary significantly between the two environments.

Infrastructure as Code (IaC): Is It the Underlying Theme for DevOps?

IaC is a widely adopted practice. It helps remove misconfigurations from the product deployment process. With IaC, scaling and replicating a stack has become effortless, in contrast to legacy systems where slow, error-prone human steps were part and parcel of the process.

Every cloud service provider or SaaS offering provides you with an Application Programming Interface (API). Developers write code on top of that API surface. The next time the startup needs to replicate the stack, only the name of the region has to be changed and the same stack will be up and ready in another region. The flexibility of DevOps with IaC is massive in scale; its extent can be seen in the fact that these high-level abstractions are available in simple configuration-based languages that make the API calls seamless.
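Replicating a stack by changing a single parameter looks roughly like this. The sketch below uses a plain Python function returning a dictionary as a stand-in for a real IaC template; the resource names, region strings, and instance count are all illustrative.

```python
def stack_config(region, env="prod"):
    """The same machine-readable stack definition, replicated to a new
    region by changing a single parameter."""
    return {
        "region": region,
        "vpc": f"{env}-vpc-{region}",
        "cluster": f"{env}-cluster-{region}",
        "instances": 3,
    }

us = stack_config("us-east-1")
eu = stack_config("eu-west-1")   # identical stack, new region
print(us["cluster"], "->", eu["cluster"])
# → prod-cluster-us-east-1 -> prod-cluster-eu-west-1
```

Real IaC tools parameterize templates the same way, which is what makes standing up a second region a one-line change rather than a rebuild.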

What Tech Stack in DevOps Development Do You Suggest?

The tech stack selection depends on where your startup has its workload. If it's on the cloud, or AWS to be specific, the Cloud Development Kit (CDK) and CloudFormation are two native options; CDK is the newer generation and the recommended one.

Puppet, Chef, and Ansible are other feature-rich automation options available on the cloud, and they are quite similar in functionality. Ansible is SSH-based: it is easy to set up, manages everything through SSH commands, and provides an automation abstraction on top of that.

Puppet and Chef are more full-blown languages that have their own management channels and separate configuration from everything else in the code. In conclusion, it really depends on which employment profiles you have and how long you have to bootstrap the development process. Select your IaC tooling carefully, keeping in mind the time, effort, and skill sets at hand: it takes longer to onboard developers on Puppet and Chef, but they offer more flexibility.

Your use case also factors into choosing an automation tool: whether it is a front-end or a back-end application, and which languages the application uses.

For CI/CD, your decision depends on whether you are developing fully on-prem, using cloud-hosted facilities, or running a hybrid infrastructure, and on what your budget allows. Jenkins is cheap, hence a good place to start, and many CD frameworks work seamlessly on top of it; Jenkins offers its services on the cloud as well. Nevertheless, clouds offer native options such as CodeDeploy or CodePipeline too.

For cloud management specifically, the native cloud services are recommended because they provide a much richer experience through better-integrated features. Application management and performance monitoring can be done with external tools, but for the cloud itself, the native solutions are best.

For security, since it is a layered construct, I recommend taking full advantage of cloud-native security offerings at the base. On top of that sits a communication layer with anomaly or intrusion detection; there are SaaS and cloud-native solutions available to tackle this, and they can be chosen with your startup in mind.

How to Bring Culture Change in a Startup That Started With Legacy Architecture?

If you inherited a team of developers working with legacy architecture, a cultural change as well as a structural shift in the team is inevitable. Change mindsets. Hire strong team players who can build automations and improve the performance of the development process in a timely fashion.

What Has Been the Adoption Rate of DevOps Among Corporations, SMBs, and Startups?

The market segmentation is not based on the size of a company but on how long it has existed. New companies find it easier to start with DevOps; more established companies that started with legacy architecture are slower in their transition. Companies such as AWS and Netflix are exceptions to this general trend.

While transitioning from separate development and operations teams to a consolidated DevOps team, start with the QA team: ask them to automate their operations and learn how to build and test every check-in, and then integrate them with the development team.

Has DevOps Made It Harder for Employers to Find These Employment Profiles?

DevOps has made application development faster, but I would say developers' mindsets haven't shifted as quickly as the technology. This gap exists because universities have not updated their curricula; graduates are not familiar with DevOps terminology.

What Would You Recommend to a Legacy Engineer to Learn to Be Reskilled?

All the SaaS, Cloud, and CI/CD companies have built a lot of training material around their services. So pick your technology stack and get started!

Three Pieces of Advice for Startups Who Are About to Start Their DevOps Journey

Start early. Invest your time and effort into building quality IaC and testing frameworks even if it means your first 2-3 rollouts take longer. Worth it. Secondly, be very clear on the KPIs of your success and the metrics of your product development health and output. Lastly, iterate and keep the metrics and parameters tight throughout the journey. Do not let the culture corrode around the edges. It is easy to regress to legacy methods if you are not vigilant throughout your DevOps journey. 

To Watch the Complete Episode, Please Click on This YouTube Link:

Read more

Migrating DynamoDB Data Across AWS Accounts

Database migration is one of the most critical and common aspects of cloud migration that DevOps engineers and cloud experts encounter on a regular basis. It can also be one of the more complex problems to solve: despite how frequently it comes up, a straightforward solution is not always readily available, because each customer application brings its own use cases and requirements. In these cases, DevOps engineers need to think outside the box, be innovative, and develop a custom solution that fulfills the criteria of their specific use case.

In one such instance, a recent development effort for one of our clients called for the migration of DynamoDB tables between two AWS accounts. Considering the extensive catalog of services and functionalities offered by AWS, one would assume there is built-in functionality to export table backups to another AWS region or AWS account, similar to what is currently possible for RDS. However, I was disappointed to find that this is not possible, so I began researching how to migrate DynamoDB data across AWS accounts.

I assumed it would be easy to implement: surely a small script could fetch all the data from a source table and issue queries to add it to a destination table. After hours of scouring Google search results, GitHub repositories, and Stack Overflow threads, I was unable to find a solution that would work for my use case. The ones I found struggled to handle tables with a large amount of data; in my case, I was dealing with a table of approximately 200,000 items.

The AWS-recommended solution for this scenario involves a two-step process: the data is first exported to an S3 bucket, from which it can be copied or exported to another S3 bucket in the destination AWS account, with the necessary permissions configured. The data can then be imported from that bucket into the destination DynamoDB table to complete the migration. In theory, these steps seem simple and straightforward, right up to the point where you discover that AWS has not provided any easy way to import data from an S3 bucket into DynamoDB. They do provide a way to export data from a DynamoDB table to an S3 bucket, but to import the data, the recommended approach is to use AWS Data Pipeline.

AWS Data Pipeline is a service that sets up an automated pipeline, run manually or on a schedule, which uses an EMR cluster to perform data migration and transformation steps. The problem with this approach is that it is not easy to set up, and it would definitely incur extra costs on the AWS account, since the resources deployed in the EMR cluster are charged for the time they are up and running.

Nevertheless, even with the template already provided to import DynamoDB data from S3, I was not able to set up AWS Data Pipeline successfully, nor could I get the logs to work in order to figure out what was wrong. At this point, I started looking into alternatives, since this solution seemed to require more effort and time than it was worth.

A few suggested solutions, involving custom Python scripts and Node modules, simply fetched the data from one table and added all the entries to another. This approach did not require any additional AWS resources. So far so good; it seemed like a promising lead. However, as I proceeded, I realized that it started to struggle at scale, with migration time ballooning for tables with more than 200,000 entries. It took around 3-4 hours to transfer 50% of the table entries, which was definitely not ideal. I needed a more optimized solution.

I finally decided to write a script of my own that utilized the asynchronous nature of NodeJS to achieve the desired functionality. The approach was to first fetch all items from the source table using multiple scan calls until every entry had been retrieved, and then use the BatchWriteItem call to add items to the destination table. BatchWriteItem imposes a maximum of 25 items per call, so I divided the table entries into batches of 25 and executed the BatchWriteItem call for each batch asynchronously, so that the script does not wait for the response of one batch call before sending the next. This greatly reduced the execution time of the script: it transferred the data from the 200,000-entry table within 6-7 minutes instead of hours.
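The batching approach described above can be sketched as follows. This is a simplified illustration, not the open-sourced script itself: `writeBatch` is a stand-in for the AWS SDK's `BatchWriteItem` call, injected as a parameter so the batching logic stays independent of AWS.

```javascript
// Split an array of items into batches of at most `size` entries;
// BatchWriteItem accepts a maximum of 25 items per call.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Fire one write call per 25-item batch without waiting for the
// previous call to finish. `writeBatch` stands in for the real SDK
// call and must return a Promise.
async function migrateItems(items, writeBatch) {
  const batches = chunk(items, 25);
  await Promise.all(batches.map((batch) => writeBatch(batch)));
  return batches.length; // number of concurrent calls issued
}
```

Because all batches are dispatched up front, the total time is bounded by the slowest single call rather than the sum of all calls, which is where the speed-up comes from.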

The next problem I faced was that BatchWriteItem is not guaranteed to process all items; according to the documentation, it sometimes returns a list of unprocessed items. For these, the script sends another request, again asynchronously: it retries batch write calls on all the unprocessed items, waits for all the calls to complete, and then checks whether any items remain unprocessed. This repeats until every item has been processed and all entries from the source DynamoDB table have been migrated to the destination table in the other AWS account. Between retry rounds, I implemented an exponential backoff algorithm, as recommended in the AWS documentation. The algorithm introduces a small delay before each retry and doubles the delay after every attempt. For example, if we start with a one-second delay after the initial retry attempt, there will be a two-second delay before the second attempt, a four-second delay before the next, and so on.
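The retry-with-backoff loop can be sketched like this. Again a simplified sketch under assumptions: the injected `writeBatch` mirrors `BatchWriteItem` by resolving with the items it could not process, and the batch size and base delay are parameters rather than the exact values used in the production script.

```javascript
// Resolve after `ms` milliseconds; used to space out retry rounds.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Split items into batches of at most `size` entries.
function toBatches(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Keep writing until every item is processed. `writeBatch` mimics
// BatchWriteItem: it takes one batch and resolves with the items it
// could NOT process. The delay doubles after each retry round
// (1s, 2s, 4s, ...), implementing exponential backoff.
async function writeWithRetry(items, writeBatch, baseDelayMs = 1000) {
  let pending = items;
  let delay = baseDelayMs;
  while (pending.length > 0) {
    // issue all batch calls for this round concurrently
    const results = await Promise.all(
      toBatches(pending, 25).map((batch) => writeBatch(batch))
    );
    pending = results.flat(); // collect still-unprocessed items
    if (pending.length > 0) {
      await sleep(delay);
      delay *= 2; // exponential backoff before the next round
    }
  }
}
```

Note that the backoff only kicks in when a round actually leaves items unprocessed, so a clean run incurs no added delay.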

For better understanding, the diagram below shows the complete workflow of the script:

To save fellow developers facing a similar problem some time and effort, we have decided to open source the code for this interesting and highly efficient solution. The script was developed by Xgrid in collaboration with our partner company, copebit AG, and the code is available in this GitHub repository.

We are planning to further optimize this script and will also publish an NPM package for this solution so that it is modular and simple enough to be used by anyone. Further enhancements will include a CLI tool within the NPM package to make it even easier to consume.

Our team regularly faces interesting scenarios while developing custom solutions and applications for our customers on AWS and other cloud environments. We plan to write a series of blogs on similar solutions we have developed and are open sourcing, to share our experiences and insights with other developers so that they can take advantage of these tools and enhance them for their own use cases and requirements.

About Xgrid

Xgrid is an IT services and cloud consulting company founded in 2012, working in the areas of Test Automation, Continuous Integration/Delivery, Workflow and Process Automation, and Custom Application and Tool Development (with end-to-end UX design and UI development), all tied to private, public, and hybrid clouds. Xgrid specializes in these verticals and provides best-in-class cloud migration, software development, automation, user experience design, graphical interface development, and product support services for our customers' custom-built solutions.

For more details on our expertise, you can visit our website.

About copebit AG

copebit AG is an innovative and dynamic Swiss IT company that focuses on cloud consulting and engineering. In addition to staying on top of the latest cloud portfolio, copebit also offers project management ranging from classic approaches to the versatile world of agile methods.

For more details on their expertise, you can visit their website.

Read more

Software Quality Assurance – A Critical Cog in the Software Clockwork

Did you know that Quality Assurance (QA) dates back to the medieval guilds of Europe, which developed and enforced strict rules for product quality? It wasn't until World War II that statistical quality control techniques were implemented to test ammunition for performance. Read it for yourself. Nowadays, it is assumed that the only purpose of QA is to report potential errors at the end of the development cycle just before the product is released: a common misconception.

While textbooks and academic courses might tell you that Software Quality Assurance (SQA) is a manual or automated testing process at the end of the development cycle to ensure a quality release/product, this is just not true. An increasing number of conglomerates and companies have realized that SQA is a mindset that introduces several activities and processes throughout the development cycle. It is a continuous, ongoing cycle, deeply integrated with software development to guarantee high quality. Contrary to popular belief, SQA engineers do not just test systems and report bugs.

“Software quality needs to be seen at a wider perspective relating to “goodness”. Looking beyond the explicit requirements into the realms of product risks and investigation of unknowns, determining the amount of delight or despair the customer will have while using the software. That is to say, the scale of how good (or bad) the software is to use”

– Dan Ashby writes in his blog.

On top of ensuring a bug-free release/product, SQA practices make sure that the software being developed meets quality standards for coding practices and adheres to defined compliances and general industry standards, all the while meeting the client's requirements.

For effective quality assurance of the software, QA engineers are involved from the planning stages of the Software Development Life Cycle (SDLC). They are part of the process in which customer requirements are analyzed and the implementation plan is developed. Early involvement and continuous participation of SQA engineers ensures effective and comprehensive testing. They explore ways to automate test cases while the feature or software is being developed. Additionally, these experts ensure code and product quality, and maintain records and documentation. Furthermore, they manage the impact of changes in the software, which includes updating the test plan to incorporate those changes. Thus, SQA is an important cog in the Continuous Integration and Continuous Delivery/Deployment (CI/CD) process.

An effective SQA process reduces the number of defects in the final product, guarantees stability, and satisfies client requirements, resulting in improved customer satisfaction and decreased maintenance cost of the product. The QA process lessens the chances of rework, provides a sense of predictability, and saves time and money.

How Our SQA Experts Do It

Among the many ways to ascertain the quality of any product or release, these are the methods Xgrid's Quality Assurance experts use to ensure premium-quality deliverables:

Shift Left Approach

Shift-left testing is quickly becoming the industry standard: the QA team is involved in the software development cycle from the very beginning. Our testing and validation cycles are initiated concurrently with the analysis of client requirements, which lets us identify bugs and issues at an early stage. This tactic reduces development cost, increases efficiency and customer satisfaction, and drastically expedites the development process.

Communicate Effectively and Timely

Communication is key. While delivering a quality product in a short time span, Xgrid makes sure that engineering techniques are paired with effective and timely communication to boost productivity and efficiency across the SQA and Dev teams. In all phases of the SDLC, the Quality Assurance and Development teams at Xgrid are kept well informed of any changes or new requirements. Our prompt, clear, and focused feedback minimizes confusion and promotes professional growth.

Take Your Time to Test

Xgrid’s QA team does not test the application in haste. Instead, we take our sweet time with it because we realize that ensuring the best quality of a service/ product is not a race to be won. Rather, it is a continuous progressive cycle. Rushing the execution of test cases in a crunched timeline can have a negative impact on the quality of deliverables. We carry out Smoke and Sanity test cycles to make sure that major features are not broken. Additionally, minute details and non-critical functionalities are tested as per the available timeline.

Be the Tester, Be the End User

We, at Xgrid, wear the tester's hat when testing an application's functionality. However, when we test for usability, our experts walk a mile in the end user's shoes. Feature development and testing go hand in hand. We realize that if our testers do not use the developed feature as the end user would, the quality of the deliverables may suffer.

To put things into perspective, let's take a simple login page example. The user enters login credentials to proceed. The validation checks are in place, the "Forgot Password" button works fine, and everything seems okay. But there is a high chance that when the user presses the back button, the previously entered credentials are still pre-filled on that screen, i.e. the fields have not been cleared. In this case, feature development is complete, validations are added, and the feature itself is QA-verified; yet this scenario, which reflects normal end-user behavior, poses a serious security risk.

User interaction with the feature is as crucial as its proper functioning. For this matter, the internal QA builds are tested as release candidates and the application is thoroughly tested keeping business needs in mind. 

Test the Product According to Its Maturity

Product maturity plays an important role in testing. A growing product undergoing major changes needs to be tested in all aspects. With repetitive test cycles, we radically reduce the number of bugs, which results in a stable, quality product. We follow the three stages of product maturity defined in Leading Quality:

Validation: At this stage, the product is rigorously tested to be a good market fit and to be stable on its own. Our approach here is to verify that the major user workflows are free of critical bugs. We opt for manual testing in this phase, since automation is not a priority because of its high development cost.

Predictability: This is the stage where a product is stable in its major workflows and growing in terms of users. The product becomes predictable at this point, and therefore this is the right time to anticipate any bugs that future development may introduce. Our testing approach is detailed and exploratory. Automation is also introduced to run the regression test cycles.

Scaling: In this phase, the software is growing within its existing user scale, and even a minor bug can adversely impact a large number of users. There is an increased focus on scale and load testing. We test the product in detail to avoid even the smallest bugs and to increase its performance. This is also a good time to look into optimizing battery, CPU, and GPU consumption. An effective QA strategy is crucial at every stage, and depending on the nature of the product, a combination of these test approaches is also used.

SQA is a systematic process, very similar to the software development life cycle, and development and testing cycles must be defined early on. The tasks of SQA engineers include, but are not limited to, quality inspection, test case automation, and code and product quality checks. They are the champions of monitoring every phase of the development process while adhering to company standards. Xgrid is a software company that delivers agile end-to-end testing solutions that reduce costs and increase efficiency; ergo, we deliver better digital products faster. So if you want to enhance your product quality without losing momentum, contact us at

Read more

Native or Cross-Platform Application Development: That Is the Question

Plethora of app options these days, no? Nowadays, our smartphones carry a profuse range of applications for just about everything. Gone are the days of hanging billboards, hailing a cab, or searching for rental apartments in the Sunday newspaper. Today, there are 4.4 million apps available on the App Store and Google Play to meet our everyday requirements and make lives easier.

“90% of Time on Mobile is spent in Apps”


Developing an app is not a walk in the park which is why it requires a robust course of action and copious problem-solving. If you’re a first-time developer about to delve into mobile app development, the first and foremost decision to make is to choose the right platform for your app. This has long-term implications in terms of feasibility, functionality, performance and scalability.

There are two primary approaches for any mobile application: native and cross-platform. Native app development refers to building a mobile app exclusively for a single platform in a compatible language. For example, a native Android app would be developed in Java and/or Kotlin; examples of native Android apps developed in Kotlin include Slack, Lyft, and Evernote. For an iOS app, however, you would use Swift and/or Objective-C; examples of native iOS apps developed in Swift include LinkedIn, WordPress, and Firefox.

Cross-platform app development refers to the process of creating an app that works on several platforms simultaneously. This is achieved via frameworks like React Native, Xamarin, and Flutter, where the product can be deployed on Android, iOS, and Windows. A few examples of cross-platform apps include Artsy, Bitesnap, and Bunch in React Native; Storyo, Insightly, and Skulls of the Shotgun in Xamarin; and Google Ads, Xianyu by Alibaba, and Hamilton in the Flutter framework.

In 1995, Sun Microsystems created a slogan “Write once, run anywhere” (WORA), or sometimes “Write once, run everywhere” (WORE), to illustrate cross-platform benefits of Java language. True story!

Each approach comes with its own baggage of boon and bane.

So how do you take your pick? Dictated herein are the crucial elements to help you choose one over the other:

What to Consider When Choosing an Approach to Build Your Mobile App

  • Application Complexity and Platform Dependence

    If you are developing a complex application that requires access to low-level APIs like Bluetooth, you'll want to go with the native approach, because it (theoretically) guarantees zero limitations on the chosen platform. A native application interacts more easily with a specific OS and the device's hardware, so getting access to all the services of the device is quite convenient.

    However, if the application does not require access to complex OS-specific features, then cross-platform development is a good choice, provided the features of the chosen framework do not pose restrictions.

    Noteworthy: Facebook and Google have launched powerful cross-platform app development frameworks namely, React Native and Flutter respectively, thereby drastically bridging the gap between native and cross-platform applications, making the latter approach a better fit for a much larger scope of applications.

  • Development and Support Feasibility

    The time it takes to build an application is significant, especially when you're on a tight schedule, so it's essential to pick the right framework when time is of the essence. If you have a short deadline for a relatively simple app, consider cross-platform development. As mentioned earlier, you do not need to work on two separate versions of the application; a single development cycle yields an app released for both Android and iOS. Native app development, on the contrary, will take roughly twice as much time, putting you behind schedule.

    Companies often require a Minimum Viable Product (MVP) for their B2B or B2C apps in the nick of time. Xgrid has worked with such clients and delivered polished applications in a very short time span.

    Choosing an approach depends vastly on your budget as well. Complex native applications cost more to develop and maintain compared to their cross-platform counterparts. If you have a limited budget to work with, cross-platform development is an ideal choice. You’ll save around 30%-40% since only a single codebase will be created for an app that works on both Android and iOS.

  • Performance and UI/UX

    Application performance is crucial to the success of any application, and a decisive factor in good performance is speed. Native applications offer higher speed and better performance; however, in some cases the cross-platform approach allows for a significant reduction in development cost and time without deterioration in user experience.

    “Statistical research shows that an average user won’t wait more than 3 seconds for an app to load.”

    Source: LitsLink

    Nevertheless, if your product demands an outstanding user experience, performance, and feature-richness, go for native development. Xgrid has developed an enterprise-level native iPad application, currently deployed in a production environment and engaging around 700 users. The app offers a variety of features such as task management and logging, clock-ins/outs, daily site reports, and employee certifications, and is designed to work in both online and offline modes.

    For some audiences, user-friendliness is directly correlated with a complementing app and device interface. Here at Xgrid, our developers find the optimal solution to this problem:

    1. Native Approach: The developers coordinate their actions such that the interfaces of iOS and Android app versions are as identical to the underlying platform as possible.

    2. Cross-Platform Approach: The developers make sure all application elements are recognizable and the product interface itself is intuitive and user-friendly.

  • Audience Reach

    Cross-platform and hybrid applications allow you to reach a wider audience than native programs since they’re targeted at several platforms at once and you’ll be able to establish contact simultaneously with iOS and Android users. As a result, you get a chance to increase your user base in the shortest possible time. This, in no way, implies that native applications do not offer reach on multiple platforms at all. They do, but they take a bit longer to reach the audience on both platforms because their Android and iOS versions are deployed in different timelines.

When to Choose Native App Development

To summarize, pick native development if:

  • You want to take full advantage of the mobile phone’s hardware, resources, and services

  • App responsiveness is uncompromisable for you

  • You want an app that can be easily updated and enhanced with new features in the future (on a single platform)

When to Choose Cross-Platform App Development

Opt for cross-platform app development if:

  • You want to maximize your reach to the target audience on multiple platforms simultaneously

  • Your application requires an extensive use of third party plugins or integration options

  • You want to test an app blueprint in the market or promote it in a short period of time

That’s a lot of big words, we know. If you are still unsure about the best approach for your application, we can assist you in reaching a decision by taking you forward step-by-step. We realize that each application is unique in nature and needs a special approach. Therefore, we facilitate our customers by modeling a feasibility report taking into account all the features of a particular project, and give advice and consultancy based on these premises, all-the-while keeping in view our client’s budget, time and need for reach. We are a team of highly qualified developers in both iOS and Android platforms who will analyze your case and recommend what is best to choose: iOS and/or Android native development or cross-platform approach. 

Want to get in touch? Drop us an email at

Read more

Microsoft Dynamics Business Central Development – Technical Deep Dive

As we discussed previously in our introductory blog post Business Central – A Modern Solution That Integrates Easily, Microsoft Business Central is an all-in-one solution with built-in business integration, providing a single comprehensive solution to meet the needs of your growing business. It does, however, come with some challenges; one that the Xgrid team recently faced and overcame was unsupported functionality in the AL language. Fortunately, Business Central allows for the development of extensions using .NET interoperability. As the name suggests, these extensions provide flexibility and scalability to the Business Central solution by allowing for extended and diverse functionalities.

How do Extensions for Dynamics NAV Work?

With NAV Extensions, you can add functionality without changing the standard solution from Microsoft. This has the obvious advantage that major NAV upgrade projects are no longer necessary. Once you are using Extensions, the customizations no longer represent a problem when upgrading to the latest version of the solution.

Extensions for Microsoft Dynamics follow a model where you define functionality as an addition to existing objects. This is how extensions can modify objects to perform business operations. We can develop multiple extensions so that development concerns are isolated and modularized, but these extensions still operate within certain limitations, and that is the challenge.

The Challenge

All the functionalities from Dynamics are inherited by extensions, but what if a development use case needs certain functionalities that are beyond the scope of Dynamics?
This is where Add-In Development for Business Central comes to the rescue: it provides a clean way to integrate .NET Framework assemblies into the Dynamics NAV Server. This opened up a lot of possibilities and ultimately helped us drive the development smoothly.

One of our many use cases was to leverage Microsoft Active Directory user permissions to perform file operations (create/update/delete) on a shared network drive. There is no built-in support for Active Directory in Dynamics NAV, so we used our own custom .NET assembly, which encapsulated all the critical file operations. The basic setup for such add-in development is provided below for reference.

Add-In Development – Technical Deep Dive

You can take advantage of .NET Framework interoperability so that Dynamics NAV objects can interact with .NET Framework objects. In your extensions, you can reference .NET Framework assemblies and call their members directly from C/AL code.

For easier understanding, we will develop a custom .NET class library that exposes a single method, `CreateFile`, which writes a `Base64`-encoded file to a specified location. Example code is provided below:

Now once you have the compiled DLL available, it’s time for Dynamics NAV .NET Interoperability to play its part.

Integrating .NET interoperability consists of the following three steps which are discussed in detail below:

1. Declaring the Assembly

To integrate .NET interoperability, the AL compiler needs to be pointed to your custom .NET assemblies; by default, it only knows the location of the default .NET assemblies. You need to open the `settings.json` file and add your assembly path.
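For illustration, with the AL Language extension for VS Code the probing paths live under the `al.assemblyProbingPaths` setting in `settings.json`. The second entry below is a hypothetical location for a custom DLL (VS Code's `settings.json` accepts comments):

```json
{
  // Folders the AL compiler searches for referenced .NET assemblies.
  // "./.netpackages" is the default; the second path is a hypothetical
  // location where the custom assembly DLL was copied.
  "al.assemblyProbingPaths": [
    "./.netpackages",
    "C:/CustomAssemblies"
  ]
}
```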

After the reference directory is added in `settings.json`, you must declare the type in a `dotnet` construct provided by the AL language, and then reference it from code using the DotNet variable type.

2. Executing Members from the .NET Assembly

After the `dotnet` reference is created in AL, it's time to utilize the functionality provided by our custom assembly. We now need to create a `codeunit` that exposes an AL procedure.

3. Publishing the Extension

You now have everything in place. Just build the extension and you are almost ready to deploy it to your production environment.

When publishing an extension, the server recompiles the code and tries to resolve all references to external assemblies. Compilation will only succeed if the server can locate and load all the referenced assemblies and types. Therefore, it is recommended to keep the custom assembly at the same path on both the development and production Business Central environments.

Key Takeaways

Dynamics NAV has been helping customers improve their businesses by providing valuable insights. At Xgrid, it helped us expand existing functionality through customization, adding capabilities via Business Central extensions. Microsoft Dynamics has a variety of features to boost productivity and is a fast, easy-to-use, and reliable solution for improving the overall efficiency of a business, with improved forecasting, scheduling, management, and scalability alongside all its other integrated applications.

With business logic in its DNA, Business Central has helped many businesses thrive, offering broad and deep functionality that runs thousands of companies all over the globe. To evolve and progress, contemporary businesses should consider shifting toward Business Central's up-to-date solutions.

Read more

Sales Enablement 101

Let’s set the scene. Your company has just launched a disruptive networking solution that is set to shake up the industry. Big names like Apple and Facebook are showing genuine interest and can’t wait to get their hands on it. There’s a lot of buzz around the entire affair. They want to see the product run their specific use cases so they can be sure it fits their requirements. However, it takes you a couple of weeks to set up the entire thing end-to-end to demonstrate one use case. Your competitors have a similar solution and a much faster turnaround time, and they gain the upper hand. The customer goes with them instead and the sale is gone.

One of the biggest challenges businesses face, especially with products requiring a tedious and complex setup procedure, is effectively managing and completing a customer sale. In this case, bringing a potential customer on board, understanding their set of requirements, and setting up a demo or Proof of Concept (POC); all of this would usually take days if not weeks and runs the risk of your potential customer being poached by your competitors. 

What can you do to reduce the time for your overall process? The answer is a Sales Enablement Tool. Since the deployment time of an end-to-end actual physical setup can’t really be reduced, the best way to make the process faster would be having an application or tool that can emulate the customer’s use case. A Sales Enablement Tool or a Sandbox can allow your customers to experience the feature set that your product offers in a customized, tailored environment. Most companies either lack the resources or have a large amount of corporate red tape which prevents them from developing a Sales Enablement Tool for their product. This is where Team Xgrid enters the equation to provide our expert services in developing a fully customized sandbox environment for your product. However, before discussing what Xgrid can do for you, let’s first briefly talk more about the value a sales enablement tool can add to your business.

What is a Sales Enablement Tool anyway, and why do I need one?

As the name suggests, a sales enablement tool is designed to empower the sales and marketing team. It gives them incredible versatility in adapting to their customers’ requirements and allows them to create a more interactive, highly dynamic, and effective sales process. Essentially a test run of the actual product, the tool is designed to supplement it, with the aim of simplifying, streamlining, and accelerating the actions required to set up a POC, as well as making that setup easily repeatable. All of this is vital when it comes to gaining the upper hand on your market competitors and ensuring your product’s success.

The key factor in all of this is time. The quicker the turnaround, the easier it is for sales reps to iterate and engage with customers. Setting up a blockchain network across multiple physical servers can take hours or even days, depending on its scale and complexity. Replacing that setup with an automation framework that brings up the network in a containerized environment on a single server cuts the time down to a fraction, while maintaining the integrity and accuracy of the use case.
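To make the containerized approach concrete, here is a minimal sketch of what such an automation framework might generate under the hood: Docker commands that bring a multi-node topology up on one server and tear it down again. The node names, network name, and the `xgrid/netnode` image are illustrative assumptions, not details of any actual product.

```python
# Hypothetical sketch: emulate a multi-device network topology in containers
# on a single server, instead of racking physical machines.

def topology_commands(nodes, network="poc-net", image="xgrid/netnode:latest"):
    """Return the docker CLI commands that would bring up the topology."""
    cmds = [f"docker network create {network}"]
    for name in nodes:
        cmds.append(
            f"docker run -d --name {name} --network {network} "
            f"--cap-add NET_ADMIN {image}"
        )
    return cmds

def teardown_commands(nodes, network="poc-net"):
    """Commands to tear the same topology back down, ready for the next demo."""
    return [f"docker rm -f {n}" for n in nodes] + [f"docker network rm {network}"]

if __name__ == "__main__":
    for cmd in topology_commands(["spine1", "leaf1", "leaf2"]):
        print(cmd)
```

Because the whole topology lives in containers on one host, a setup that took days on physical hardware becomes a scripted bring-up and tear-down that runs in minutes and is repeatable on demand.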

The flexible nature of such sandboxing tools enables teams to use them for training purposes as well. Think of demonstrating a single POC or use case at scale. The same concerns still stand, more so in this case because more people are involved. Having a tool that can spin up the required scenario quickly and reliably means more people can be trained on that product. More trained people means a larger sales force, resulting in wider customer outreach and, consequently, greater visibility for the business. The perfect chain reaction.

Customized POC tools have the added advantage of being inherently virtualized, which lets marketing teams easily demonstrate their solutions online. Given the way the world works today, and all the more so during the ongoing COVID-19 pandemic, that capability is imperative.

How does Xgrid fit in?

Now that you’re all aboard the hype train, where can one get such an application made for them? One word – Xgrid.

It is essential that sales enablement and POC tools are tailored to the exact requirements of the product. They need to capture and convey the product’s selling points effectively and efficiently. That is our number one priority when building these applications. We engage with clients to understand their product and what they want to demonstrate in their bespoke sandboxing environment. Every tool is tailored to support customer-specific use cases and requirements, ensuring the resulting solution is the best fit for you. Our extensive industry experience provides expertise in end-to-end tool design, development, delivery, and support, giving customers a unique experience in every sales interaction.

Our portfolio includes the development of a tailor-made POC solution for a Fortune 100 company, which enabled them to easily demonstrate their unique SD-WAN solution in customer meetings, in-house training, and global live events. The sandboxing solution allowed their sales team to test out platform releases quickly and bring up client requests and use cases efficiently. 

It has a simple drag-and-drop user interface (UI) where users can create their network topologies with complete freedom. The deployment process is where all the magic happens. Large, complex configurations consisting of up to 50 network devices spread across multiple physical servers are deployed automatically and managed completely by the application without any user intervention, thanks to the robust and exhaustive automation framework driving it. With a few simple clicks on the UI, users can also edit their configurations, tear them down completely and start from scratch, or save existing topologies that the tool can deploy repeatedly.
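The core ideas described above are a placement step that spreads devices over the available servers and a saved-topology format that can be redeployed on demand. The following sketch illustrates both with a deliberately simple data model and round-robin placement; these are assumptions for illustration, not the actual application’s internals.

```python
# Illustrative sketch of a deployment engine's scheduling and save/load steps.
import json

def place_devices(devices, servers):
    """Assign each network device to a server round-robin, so a large
    topology (e.g. 50 devices) is spread over the available hosts."""
    if not servers:
        raise ValueError("at least one server is required")
    return {dev: servers[i % len(servers)] for i, dev in enumerate(devices)}

def save_topology(devices, links, path):
    """Persist a topology so the tool can redeploy it repeatedly."""
    with open(path, "w") as f:
        json.dump({"devices": devices, "links": links}, f)

def load_topology(path):
    """Load a previously saved topology back into memory."""
    with open(path) as f:
        data = json.load(f)
    return data["devices"], data["links"]
```

Saving the topology as plain data is what makes a demo repeatable: the same file can be loaded and redeployed for every customer meeting or training session.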

Sounds too good to be true, doesn’t it? That’s the beauty of it. It streamlines the entire process and makes you wonder why you didn’t have a sandbox environment complementing your product in the first place. Sales enablement tools offer incredible versatility and should be a part of every product’s marketing strategy.

Still don’t believe us? Take our community’s word for it. Xgrid received the Best in ICT Services award in 2019 from P@SHA (a body consisting of IT companies and industry leaders from across the country) for the Sales Enablement solution we built, which further underscores the advantages sandboxing solutions have to offer.

Start your journey with Xgrid to experience the wonders of Sales Enablement & Sandboxing for your business. Request a Demo today and let us drive your business forward.


Business Central – A Modern Solution That Integrates Easily

Are you planning to expand your business, but your current technology does not support it? Do you intend to integrate all your current operations, running apps, data, and people into one platform? Microsoft’s Business Central provides unparalleled flexibility with a well-defined path for businesses that are ready to evolve and grow.

Previously known as Dynamics NAV (Navision), Business Central is a mid-market, all-in-one solution with built-in business integration, providing a single comprehensive platform to meet the needs of your growing business. Part of Microsoft Dynamics 365, it is an integrated ERP solution that automatically pulls systems and processes together to manage financials, sales, services, and operations. It also connects with third-party applications such as payroll, CRM, or other industry-specific systems. This management solution helps organizations streamline processes including fixed assets, order processing, inventory, human resources, sales and services, project management, and manufacturing. In addition to simplifying management, Business Central helps users evaluate project performance: Power BI dashboards and charts provide actionable insights for faster, better-informed decisions and the ability to accurately forecast the future of your business.

Microsoft Dynamics enables businesses to develop custom solutions that are easy to configure, along with a wide range of functional features. It is the go-to choice for businesses that want minimal configuration combined with ease of use, to improve business processes and deliver a modern, scalable, and future-proof solution.

What does Business Central bring with itself?

Business Central is fast to implement and easy to configure, in product design, development, implementation, and usability alike. It can be deployed either in the cloud or in an on-premises datacenter, depending on the use case. It offers a consistent user experience across Windows, Android, and iOS devices, helping you run your business anywhere. Business Central also provides multi-language support, giving users the flexibility to view the application in the language of their choice, while country-specific localizations adapt its functionality to a particular market’s requirements.

Microsoft’s datacenters provide encryption that helps protect data from unauthorized access, maintaining high standards of security. Business Central also lets users make informed decisions from connected data, so they can better fulfill project requirements and reach the optimal level of output, drawing on insights into a project’s current status and resource-usage metrics.

Using MS Dynamics 365 Business Central for App Development at Xgrid

With Dynamics 365 Business Central revolutionizing the way businesses operate, team Xgrid adopted this solution for managing multiple jobs and task assignments, enhancing the functionality of a construction application for a notable customer. The app gives the manager central control while assigning tasks to the on-site team, provides full visibility of on-site tasks, and helps in the better management of operations. The solution pairs a desktop version with an iPad application to streamline field task management and resource allocation, and to effectively bridge the communication gap between off-field managers and on-site employees.

These applications, along with a legacy SQL Server-based database, gain enhanced business management capabilities. For the dynamic management of off-site projects, we used the Microsoft Dynamics Business Central (on-premises) version to extract and store data from the web applications and legacy databases. The solution leveraged the dynamic capabilities of web services to offer data-rich, cognitive services.

Business Central’s flexibility to customize applications allowed us to extend the existing functionality of the Job Management module in the construction application, bringing robust on-premises ERP functionality to that module. We developed our extension on top of the application to meet our client’s business needs. The extension connected the Business Central web services to the API layer of our solution, which acted as middleware between the database and the application’s front ends. Through Dynamics Business Central web services, application users can view real-time data that helps them dynamically manage their off-site projects. All workers, contractors, and suppliers can now perform their tasks while keeping each crew member informed and updated at all times.
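As a rough illustration of the middleware idea, the API layer translates front-end requests into calls against Business Central’s OData web services. The sketch below only builds the request URL; the base address, instance name, company, page name (`JobTasks`), and field name (`Job_No`) are hypothetical placeholders, not the actual deployment’s endpoints.

```python
# Hedged sketch: constructing a Business Central OData V4 request URL,
# as a middleware layer might do when relaying a front-end query.
from urllib.parse import quote

# Hypothetical on-premises Business Central endpoint.
BASE = "https://bc.example.com:7048/BC140/ODataV4"

def job_tasks_url(company, job_no):
    """Build the OData V4 URL for the tasks of one job, with a $filter."""
    company_seg = f"Company('{quote(company)}')"
    flt = quote(f"Job_No eq '{job_no}'")
    return f"{BASE}/{company_seg}/JobTasks?$filter={flt}"
```

The middleware would issue an authenticated GET against such a URL and reshape the returned JSON for the desktop and iPad front ends, keeping Business Central itself hidden behind the API layer.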

The second module scheduled periodic data processing by configuring Job Queues in Business Central. These jobs continuously synced data to and from the legacy databases. One key use of Business Central in the complete solution was to bridge legacy systems with a range of devices, such as the iPhone, iPad, and Windows client, for day-to-day user activities.
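The periodic sync behind such Job Queues can be pictured as an incremental copy driven by a watermark: each run moves only the records modified since the previous run. The record shape and in-memory stores below are assumptions chosen for illustration; a real Job Queue would run codeunits against actual tables.

```python
# Minimal sketch of watermark-based incremental sync between two stores.

def sync_changes(source, target, last_sync):
    """Copy records changed after `last_sync` from source to target.
    Each record is a dict with an 'id' and a numeric 'modified' stamp.
    Returns the new watermark to persist for the next scheduled run."""
    newest = last_sync
    for rec in source:
        if rec["modified"] > last_sync:
            target[rec["id"]] = rec          # upsert into the target store
            newest = max(newest, rec["modified"])
    return newest
```

Because the watermark only advances when newer records are seen, re-running the job is harmless, which is exactly the property a scheduled queue needs when it fires every few minutes.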

Dynamics Business Central helps businesses streamline their application processes and present logical data, as users need it, for consumption by the desktop and mobile applications. The solution also employed the resource module, with enhanced functionality, to provide users with the requested material for a specific job. This further enabled the procurement department to generate purchase orders based on the materials requested from different job locations. Having data available in real time helps decision makers act promptly, with no separate data migration needed to transition to this robust platform.
