Creating a Structured Pipeline to Sync from Production to Staging through Jenkins

At Exemplifi, we tailor our hosting architectures to match the specific requirements and traffic levels of our clients' websites. For websites experiencing light to medium traffic, we implement a straightforward architecture. This includes a pair of EC2 instances, with one dedicated to the database and the other handling application files. Additionally, we utilize an S3 bucket for media file storage and CloudFront for effective caching.

For websites that attract large volumes of traffic and demand higher performance, we enhance this architecture. We add a load balancer, incorporate Auto Scaling for dynamic resource management, and integrate AWS WAF for enhanced security.

Moreover, each of our websites is equipped with a staging site. These staging sites are hosted on individual EC2 instances and are managed efficiently using Docker and Portainer, a widely used open-source tool for managing Docker environments. This setup allows us to test and refine website features in a controlled environment before deploying them live.

Introduction

Maintaining staging sites within our AWS infrastructure primarily serves to minimize the risks associated with directly implementing updates to plugins, themes, or core elements on the production website. 

Whenever there are changes to be made or new features to be added to a website, we first roll these out in the staging environment. This allows us to thoroughly test and refine these updates before deploying them to the live production environment.

Risks of updating plugins, themes, and the WordPress core

Updating plugins, themes, and the WordPress core directly on a production environment can be risky, as it may lead to compatibility issues, website downtime, and security vulnerabilities. These updates, if not tested properly, can break website functionality, cause data loss, and negatively affect user experience and search engine rankings. Implementing such changes without testing could also require a complicated and resource-intensive rollback process if something goes wrong.

By contrast, making these updates in a staging environment first allows for safe testing and debugging. This approach ensures compatibility and functionality before any changes are made live, significantly reducing the risk of disrupting the live website. It preserves user trust and ensures a smooth, continuous online presence, which is crucial for maintaining website integrity and performance.

Importance of establishing a pipeline

This pipeline plays a pivotal role in streamlining the migration of content and the database from the production to the staging environment. 

It alleviates the burdensome task of transferring files and the database. Moreover, it is essential that the staging site mirrors the production environment precisely during updates, to prevent any discrepancies in configuration.

How it works

This pipeline is structured into six distinct stages, each executed through a bash script dedicated to a specific function. The stages are as follows:

Stage 1: Content synchronization from production to staging is achieved using the Linux rsync command. This excludes media files, as they are hosted on an S3 bucket.
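
For illustration, the Stage 1 script might boil down to a single rsync call along these lines; the user, host, SSH key, and paths shown here are placeholders rather than our actual configuration:

```bash
# Placeholder user, host, key, and paths; the uploads directory is excluded
# because media files are served from S3 rather than the web servers.
rsync -avz --delete \
  --exclude 'wp-content/uploads/' \
  -e "ssh -i /var/lib/jenkins/.ssh/deploy_key" \
  /var/www/html/ deploy@staging.example.com:/var/www/html/
```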

Stage 2: A backup of the database is created on the production server.
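
A minimal sketch of the Stage 2 backup, assuming a MySQL/MariaDB database (standard for WordPress) and credentials injected as environment variables, for example from Jenkins credentials, rather than hard-coded:

```bash
# Dump the production database to a temporary file on the production server.
mysqldump -u "$PROD_DB_USER" -p"$PROD_DB_PASS" "$PROD_DB_NAME" \
  > /tmp/prod_db_backup.sql
```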

Stage 3: This database backup is then transferred to the staging server.
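
Stage 3 can be as simple as an scp (or rsync) of the dump file; again, the key, user, and host are illustrative:

```bash
# Copy the backup from the production server to the staging server.
scp -i /var/lib/jenkins/.ssh/deploy_key \
  /tmp/prod_db_backup.sql deploy@staging.example.com:/tmp/prod_db_backup.sql
```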

Stage 4: The current database on the staging server is dropped.
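
A hedged sketch of Stage 4, dropping and immediately recreating the staging database so the import in the next stage starts from a clean slate (the variable names are placeholders):

```bash
# Drop and recreate the staging database.
mysql -u "$STG_DB_USER" -p"$STG_DB_PASS" \
  -e "DROP DATABASE IF EXISTS \`$STG_DB_NAME\`; CREATE DATABASE \`$STG_DB_NAME\`;"
```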

Stage 5: The backup is imported to restore the database on the staging server.
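
Stage 5 then restores the dump into the freshly created database:

```bash
# Import the production dump into the staging database.
mysql -u "$STG_DB_USER" -p"$STG_DB_PASS" "$STG_DB_NAME" < /tmp/prod_db_backup.sql
```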

Stage 6: Finally, the URLs in the database are updated to reflect the staging environment's URL (WordPress sites only).
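
For the WordPress sites, one common way to perform Stage 6 is WP-CLI's search-replace, which also handles serialized data that a plain SQL replace would corrupt; the domains below are placeholders:

```bash
# Rewrite production URLs to the staging URL across all tables.
wp search-replace 'https://www.example.com' 'https://staging.example.com' \
  --all-tables --path=/var/www/html
```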
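
Wired together in Jenkins, the six scripts can be expressed as a declarative pipeline. The sketch below is a simplified skeleton; the stage names and script paths are illustrative, not the exact ones in our jobs:

```groovy
pipeline {
    agent any
    stages {
        stage('Sync content')    { steps { sh './scripts/01_sync_content.sh' } }
        stage('Backup prod DB')  { steps { sh './scripts/02_backup_db.sh' } }
        stage('Transfer backup') { steps { sh './scripts/03_transfer_backup.sh' } }
        stage('Drop staging DB') { steps { sh './scripts/04_drop_staging_db.sh' } }
        stage('Import backup')   { steps { sh './scripts/05_import_db.sh' } }
        stage('Update URLs')     { steps { sh './scripts/06_update_urls.sh' } }
    }
}
```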

Need for an Elastic File System to store and sync updated files

We employ an Elastic File System (EFS) to maintain updated files from the production environment for synchronization with the GitHub repository. To mitigate vulnerability concerns and prevent exposure of file and folder metadata, we avoid storing a .git folder on either our production or staging servers. Additionally, a straightforward shell script is used to synchronize content from the staging server to the Elastic File System.

This process, facilitated by a basic bash script, ensures the syncing of content, including core files, back to the GitHub repository.
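
A minimal sketch of that sync script, assuming the EFS volume is mounted on the staging server at /mnt/efs/site and holds the repository's working copy (and its .git directory) off the web servers themselves; the mount point, paths, and branch name are assumptions for illustration:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Sync the updated site files from the staging web root into the EFS working
# copy, skipping media (served from S3) and any stray VCS metadata.
rsync -av --delete \
  --exclude 'wp-content/uploads/' \
  --exclude '.git/' \
  /var/www/html/ /mnt/efs/site/

# Commit and push so the GitHub repository reflects the plugin, theme, and
# core updates performed during maintenance.
cd /mnt/efs/site
git add -A
git commit -m "Sync updates from staging" || echo "Nothing to commit"
git push origin main
```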

Streamlined Maintenance Workflow

This pipeline streamlines content synchronization from production to staging, completing the task in just a few minutes and eliminating the cumbersome manual effort the process previously required.

Streamlined workflow for monthly maintenance: syncing content and the database from production to staging, then performing plugin, theme, and core updates on the staging environment

This pipeline has been instrumental in simplifying and securing the update process directly on the production end. It enables one-click synchronization of content and database from production to staging. All updates are then carried out on the staging server, where the site is thoroughly tested for any bugs. Once everything is confirmed to be working correctly, the updates are then pushed to the production environment.

Once updates are executed on the production server, we synchronize the content from the staging server to the Elastic File System. Subsequently, a bash script is employed to ensure that the GitHub repository is consistently updated with the changes made during the update process.

Conclusion

In summary, the entire process has been streamlined to enable the Development and DevOps teams to carry out updates on the production environment smoothly, without encountering any issues or bugs. Significant code changes or the addition of new features are first implemented on the staging environment and then seamlessly transitioned to the production end.

If you liked this insight, please join us on LinkedIn, X, and Facebook for more updates.
