
Monday, 1 July 2024

How to Reboot RedHat Server 9 using Jenkins and Ansible Playbooks

This post shows how to automate rebooting Red Hat Enterprise Linux 9 servers with Jenkins and Ansible Playbooks.

You will create a Jenkins pipeline that reboots a Red Hat Server 9 using an Ansible playbook, then builds and tests the application, scans it with SonarQube and Fortify, and stores the resulting artifact in JFrog Artifactory.

Below are the detailed steps along with the required Jenkinsfile, Ansible playbook, and information on the necessary credentials.

Prerequisites

  1. Jenkins Setup:

    • Jenkins should be installed and configured.
    • Plugins: Ansible, SonarQube Scanner, Fortify, JFrog Artifactory.
  2. Credentials:

    • Ansible: SSH Key for Red Hat Server.
    • SonarQube: API Token.
    • Fortify: API Token.
    • JFrog Artifactory: Username and API Key.
  3. Tools:

    • Ansible installed and configured on Jenkins.
    • SonarQube and Fortify servers accessible from Jenkins.
    • JFrog Artifactory accessible from Jenkins.
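
Before creating the pipeline, it is worth confirming that the Jenkins agent can actually reach these tools and servers. Below is a minimal sketch of such a check, assuming the example hostnames used later in this post (sonarqube.example.com, fortify.example.com, artifactory.example.com); substitute your own values.

bash
# Run this on the Jenkins agent that will execute the pipeline.

# Ansible must be installed locally
ansible --version

# SonarQube status endpoint (expect a JSON response containing "UP")
curl -sf https://sonarqube.example.com/api/system/status

# Artifactory ping endpoint (expect "OK")
curl -sf https://artifactory.example.com/artifactory/api/system/ping

# Fortify: a plain HTTPS reachability check
curl -sfI https://fortify.example.com/ > /dev/null && echo "Fortify SSC reachable"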

Jenkins Pipeline Script (Jenkinsfile)

groovy
pipeline {
    agent any

    environment {
        ANSIBLE_PLAYBOOK        = 'reboot-server.yml'
        ANSIBLE_INVENTORY       = 'hosts'
        SONARQUBE_SERVER        = 'sonarqube.example.com'
        SONARQUBE_TOKEN         = credentials('sonarqube-token')
        FORTIFY_SERVER          = 'fortify.example.com'
        FORTIFY_TOKEN           = credentials('fortify-token')
        ARTIFACTORY_SERVER      = 'artifactory.example.com'
        ARTIFACTORY_CREDENTIALS = credentials('artifactory-credentials')
    }

    stages {
        stage('Reboot Server') {
            steps {
                script {
                    ansiblePlaybook(
                        playbook: "${ANSIBLE_PLAYBOOK}",
                        inventory: "${ANSIBLE_INVENTORY}",
                        extras: "--become --extra-vars 'ansible_become_pass=${env.ANSIBLE_SSH_PASS}'"
                    )
                }
            }
        }
        stage('Build') {
            steps {
                // Replace with your build steps, e.g., Maven, Gradle
                echo 'Building the application...'
            }
        }
        stage('Test') {
            steps {
                // Replace with your testing steps
                echo 'Running tests...'
            }
        }
        stage('SonarQube Scan') {
            environment {
                SONARQUBE_URL = "${SONARQUBE_SERVER}"
            }
            steps {
                withSonarQubeEnv('SonarQube') {
                    sh 'sonar-scanner -Dsonar.projectKey=my_project -Dsonar.sources=. -Dsonar.host.url=${SONARQUBE_URL} -Dsonar.login=${SONARQUBE_TOKEN}'
                }
            }
        }
        stage('Fortify Scan') {
            steps {
                script {
                    // Assuming Fortify command-line tools are installed and configured
                    sh "sourceanalyzer -b my_project -scan -f my_project.fpr -url ${FORTIFY_SERVER} -token ${FORTIFY_TOKEN}"
                }
            }
        }
        stage('Artifact Storage') {
            steps {
                script {
                    // Replace with your artifact storage steps
                    sh "curl -u ${ARTIFACTORY_CREDENTIALS} -T ./path/to/your/artifact.ext https://${ARTIFACTORY_SERVER}/artifactory/path/to/repo/"
                }
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}

Ansible Playbook (reboot-server.yml)

yaml
---
- name: Reboot Red Hat Server
  hosts: all
  become: yes
  tasks:
    - name: Reboot the server
      ansible.builtin.reboot:
        reboot_timeout: 300

Inventory File (hosts)

ini
[all]
redhat-server-1 ansible_host=your.server.ip ansible_user=your_ssh_user ansible_ssh_private_key_file=/path/to/ssh_key
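
Before wiring the playbook into Jenkins, it helps to test it by hand from the Jenkins agent. A minimal sketch, assuming the inventory and playbook files shown above are in the current directory:

bash
# Confirm SSH connectivity to every host in the inventory
ansible -i hosts all -m ping

# Validate the playbook without touching the servers
ansible-playbook -i hosts reboot-server.yml --syntax-check

# Perform the actual reboot (the target user needs sudo/become rights)
ansible-playbook -i hosts reboot-server.yml --become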

Adding Credentials in Jenkins

  1. Ansible SSH Key:

    • Go to Jenkins Dashboard > Credentials > System > Global credentials (unrestricted).
    • Add a new credential of type "SSH Username with private key".
    • Add your SSH key file for the Red Hat Server.
  2. SonarQube Token:

    • Go to Jenkins Dashboard > Credentials > System > Global credentials (unrestricted).
    • Add a new credential of type "Secret text".
    • Enter your SonarQube API token.
  3. Fortify Token:

    • Repeat the same steps as for the SonarQube Token, but use your Fortify API token.
  4. JFrog Artifactory Credentials:

    • Add a new credential of type "Username with password".
    • Enter your Artifactory username and API key.
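
Before storing the secrets in Jenkins, you can confirm they work with a quick sanity check from any machine that can reach the servers. The sketch below assumes the example hostnames used in the Jenkinsfile; the placeholder values are yours to fill in.

bash
# Replace the placeholders with your real values before running.
SONAR_TOKEN='<your-sonarqube-token>'
ARTIFACTORY_USER='<your-artifactory-user>'
ARTIFACTORY_KEY='<your-artifactory-api-key>'

# SonarQube: a valid token returns {"valid":true}
curl -su "${SONAR_TOKEN}:" https://sonarqube.example.com/api/authentication/validate

# Artifactory: valid credentials return "OK"
curl -su "${ARTIFACTORY_USER}:${ARTIFACTORY_KEY}" https://artifactory.example.com/artifactory/api/system/ping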

Summary

This Jenkins pipeline script is designed to:

  1. Reboot a Red Hat Server 9 using Ansible.
  2. Build the application (customise the build steps according to your project).
  3. Run tests (customise the test steps according to your project).
  4. Perform a SonarQube scan for code quality analysis.
  5. Perform a Fortify scan for security analysis.
  6. Upload the artifact to JFrog Artifactory.

Make sure to replace placeholder steps with your actual build and test commands, and ensure that your Jenkins environment is configured correctly with the necessary tools and credentials.

Friday, 1 March 2024

Resolving Pre-Configuration Issues for Sonar with Elasticsearch and Tuning 'vm.max_map_count'


In large-scale deployments, integrating SonarQube (Sonar) with an Elasticsearch stack for code analysis can lead to configuration challenges. A common hurdle DevOps Engineers encounter is the 'vm.max_map_count' setting on the Elasticsearch nodes. This article delves into understanding why this setting is crucial, how to resolve pre-configuration issues, and the steps to adjust it for optimal performance.

Why 'vm.max_map_count' Matters

  • Elasticsearch Memory Mapping: Elasticsearch heavily relies on virtual memory mapping for its indexing and search operations. The 'vm.max_map_count' kernel setting on Linux systems limits the maximum number of virtual memory areas a process can have.
  • SonarQube and Indexing: When Sonar analyzes large codebases, it sends a significant amount of data to Elasticsearch for indexing. If the 'vm.max_map_count' value is too low, Elasticsearch may run out of available virtual memory areas, leading to errors and instability.
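
To see how close an Elasticsearch process is to this limit, you can compare the number of memory-mapped regions it currently holds against the kernel setting. A minimal sketch, assuming Elasticsearch runs under its usual Java bootstrap class (adjust the pgrep pattern to match your installation):

Bash
# Find the Elasticsearch process ID (the pattern is an assumption; adjust for your setup)
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n 1)

# Number of virtual memory areas the process currently uses
wc -l < "/proc/${ES_PID}/maps"

# The kernel limit it must stay under
sysctl vm.max_map_count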

Pre-Configuration Checks

  1. Baseline: Before modifying the setting, check the current value on your Elasticsearch nodes:

    Bash
    sysctl -a | grep vm.max_map_count
    
  2. SonarQube Recommendations: Refer to the official SonarQube documentation for recommended 'vm.max_map_count' settings based on your deployment size and expected project load.

Configuring 'vm.max_map_count'

  1. Temporary Adjustment: To temporarily change the setting for the current session:

    Bash
    sudo sysctl -w vm.max_map_count=262144  # Example value
    
  2. Permanent Change: To persistently modify the setting, edit the /etc/sysctl.conf file:

    Bash
    sudo nano /etc/sysctl.conf 
    

    Add the following lines:

    # Adjust the value as needed
    vm.max_map_count = 262144
    

    Save the file and apply the changes:

    Bash
    sudo sysctl -p
    
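
On distributions that use systemd, an equivalent way to persist the setting is a drop-in file under /etc/sysctl.d/ instead of editing /etc/sysctl.conf directly. A sketch using the same example value (the file name is arbitrary):

Bash
# Create a dedicated drop-in file for the setting
echo 'vm.max_map_count = 262144' | sudo tee /etc/sysctl.d/99-sonarqube.conf

# Load all sysctl configuration files and verify the result
sudo sysctl --system
sysctl vm.max_map_count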

Additional Considerations

  • Heap Size: Ensure your Elasticsearch nodes have sufficient memory allocated to the heap (consult SonarQube documentation for recommendations). Increasing 'vm.max_map_count' without adequate memory can lead to other performance issues.
  • Monitoring: After making the changes, closely monitor Elasticsearch and SonarQube performance. Look for errors related to memory mapping or out-of-memory exceptions.
  • Alternative File Storage: For very large-scale deployments, investigate alternative file storage options for Elasticsearch that may be less reliant on memory mapping.
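
If you are running SonarQube's embedded Elasticsearch node, its heap is controlled by the sonar.search.javaOpts property in sonar.properties (an external Elasticsearch cluster has its own jvm.options instead). A minimal sketch, assuming an installation under /opt/sonarqube; the path and values are illustrative:

Bash
# Show the current search-node JVM options (commented lines indicate defaults)
grep -n 'sonar.search.javaOpts' /opt/sonarqube/conf/sonar.properties

# Example of what you might set in that file (edit the file itself; values are illustrative):
#   sonar.search.javaOpts=-Xms2g -Xmx2g -XX:+HeapDumpOnOutOfMemoryError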

Important Notes:

  • The appropriate value for 'vm.max_map_count' will depend on your specific deployment. Start with the SonarQube recommendations and adjust as needed.
  • Thoroughly test any configuration changes in a staging environment before applying them to production.

