.zip release (or install git)
-
-3. Unzip the folder and enter it
-
-4. Open the .env file and change the line with "OUTER_HOSTNAME" to contain your IP:
-
-```bash
-OUTER_HOSTNAME=YOUR.IP.HERE
-```
-
-5. Run Docker Compose:
-```bash
-docker compose up -d
-```
-
-### Configurations (high availability, scale, proxies, default users, etc.)
-https://shuffler.io/docs/configuration
-
-![architecture](https://github.com/frikky/Shuffle/raw/main/frontend/src/assets/img/shuffle_architecture.png)
-
-### After installation
-1. Go to http://localhost:3001 (or your servername - HTTPS is on port 3443)
-2. Set up your admin account (username & password). Shuffle doesn't have a default username and password.
-3. Sign in with the same username & password! Go to /apps and see if you have any apps yet. If not, you may need to [configure proxies](https://shuffler.io/docs/configuration#production_readiness)
-4. Check out https://shuffler.io/docs/configuration as it has a lot of useful information for getting started
-
-![Admin account setup](https://github.com/Shuffle/Shuffle/blob/main/frontend/src/assets/img/shuffle_adminaccount.png?raw=true)
-
-### Useful info
-* Check out [getting started](https://shuffler.io/docs/getting_started)
-* The default state of Shuffle is NOT scalable. See [production setup](https://shuffler.io/docs/configuration#production_readiness) for more info
-* The server is available on http://localhost:3001 (or your servername)
-* Further configuration can be done in docker-compose.yml and .env.
-* The default database location is in the same folder: ./shuffle-database
-
-# Local development installation
-
-Local development is pretty straightforward with **ReactJS** and **Golang**. This part is intended to help you run the code for development purposes. We recommend running Shuffle with Docker Compose, then manually running the portion you want to test and/or edit.
-
-**PS: You have to stop the backend Docker container to get this one working**
-
-**PPS: Use the "main" branch when developing for an easier setup**
-
-## Frontend - ReactJS w/ cytoscape
-http://localhost:3000 - requires [npm](https://nodejs.org/en/download/)/[yarn](https://yarnpkg.com/lang/en/docs/install/#debian-stable)/your preferred package manager. Runs independently from the backend.
-```bash
-cd frontend
-yarn install
-yarn start
-```
-
-## Backend - Golang
-http://localhost:5001 - REST API - requires [>=go1.13](https://golang.org/dl/)
-```bash
-export SHUFFLE_OPENSEARCH_URL="https://localhost:9200"
-export SHUFFLE_ELASTIC=true
-export SHUFFLE_OPENSEARCH_USERNAME=admin
-export SHUFFLE_OPENSEARCH_PASSWORD=StrongShufflePassword321!
-export SHUFFLE_OPENSEARCH_SKIPSSL_VERIFY=true
-cd backend/go-app
-go run main.go walkoff.go docker.go
-```
-**WINDOWS USERS:** Follow [this guide](https://www.wikihow.com/Create-an-Environment-Variable-in-Windows-10) to add environment variables on your machine.
-
-Large portions of the backend are written in another repository - [shuffle-shared](https://github.com/frikky/shuffle-shared). If you want to update any of this code and test it in real time, we recommend following these steps:
-1. Clone shuffle-shared to a local repository
-2. Open the Shuffle backend's go.mod file (./shuffle/backend/go.mod) (**NOT** the one in shuffle-shared)
-3. Uncomment the following line and change the path AFTER the => to point to your directory:
-```
-//replace github.com/frikky/shuffle-shared => ../../shuffle-shared
-```
-4. Make the changes you want, then restart the backend server!
-5. With your changes made, make a pull request :fire:
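For reference, once uncommented the go.mod directive would read as follows (the relative path is just the example from the snippet above - adjust it to wherever you cloned shuffle-shared):

```
replace github.com/frikky/shuffle-shared => ../../shuffle-shared
```

Go then resolves the module from that local path instead of the published version, so edits in shuffle-shared take effect as soon as the backend is rebuilt or restarted.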
-
-## Database - Opensearch
-Make sure this is running through docker-compose, and that the backend points to it with SHUFFLE_OPENSEARCH_URL defined.
-
-What this means:
-1. Make sure you have Docker Compose installed
-2. Make sure you have the docker-compose.yml file from this repository
-3. Run `docker-compose up opensearch -d`
-
-## Orborus
-Orborus handles the execution of Workflows.
-PS: This requires some specific environment variables
-```bash
-cd functions/onprem/orborus
-go run orborus.go
-```
-
-Environment variables (modify for Windows):
-```bash
-export ORG_ID=Shuffle
-export ENVIRONMENT_NAME=Shuffle
-export BASE_URL=http://YOUR-IP:5001
-export DOCKER_API_VERSION=1.40
-```
-
-And that's it - hopefully it worked! If it didn't, please email [frikky@shuffler.io](mailto:frikky@shuffler.io)

+ 0 - 86
Shuffle/.github/push_nightly.sh

@@ -1,86 +0,0 @@
-# This can be done in the dockerpush workflow itself
-# Done manually for now since GHCR isn't being pushed to easily with the current GitHub Actions CI. Nightly = latest IF we run hotfixes on latest
-
-### Pull latest from ghcr CI/CD
-docker pull ghcr.io/shuffle/shuffle-app_sdk:nightly
-docker pull ghcr.io/shuffle/shuffle-worker:nightly
-docker pull ghcr.io/shuffle/shuffle-orborus:nightly
-docker pull ghcr.io/shuffle/shuffle-frontend:nightly
-#docker pull ghcr.io/shuffle/shuffle-backend:nightly
-#
-### NIGHTLY releases
-docker tag ghcr.io/shuffle/shuffle-app_sdk:nightly ghcr.io/frikky/shuffle-app_sdk:nightly
-docker tag ghcr.io/shuffle/shuffle-worker:nightly  ghcr.io/frikky/shuffle-worker:nightly
-docker tag ghcr.io/shuffle/shuffle-orborus:nightly ghcr.io/frikky/shuffle-orborus:nightly
-docker tag ghcr.io/shuffle/shuffle-frontend:nightly ghcr.io/frikky/shuffle-frontend:nightly
-docker tag ghcr.io/shuffle/shuffle-backend:nightly ghcr.io/frikky/shuffle-backend:nightly
-
-docker push ghcr.io/frikky/shuffle-app_sdk:nightly
-docker push ghcr.io/frikky/shuffle-worker:nightly
-docker push ghcr.io/frikky/shuffle-orborus:nightly
-docker push ghcr.io/frikky/shuffle-frontend:nightly
-docker push ghcr.io/frikky/shuffle-backend:nightly
-
-### LATEST releases:
-## shuffle/shuffle
-docker tag ghcr.io/shuffle/shuffle-app_sdk:nightly ghcr.io/shuffle/shuffle-app_sdk:latest
-docker tag ghcr.io/shuffle/shuffle-worker:nightly  ghcr.io/shuffle/shuffle-worker:latest
-docker tag ghcr.io/shuffle/shuffle-orborus:nightly ghcr.io/shuffle/shuffle-orborus:latest
-docker tag ghcr.io/shuffle/shuffle-frontend:nightly ghcr.io/shuffle/shuffle-frontend:latest
-docker tag ghcr.io/shuffle/shuffle-backend:nightly ghcr.io/shuffle/shuffle-backend:latest
-
-docker push ghcr.io/shuffle/shuffle-app_sdk:latest
-docker push ghcr.io/shuffle/shuffle-worker:latest
-docker push ghcr.io/shuffle/shuffle-orborus:latest
-docker push ghcr.io/shuffle/shuffle-frontend:latest
-docker push ghcr.io/shuffle/shuffle-backend:latest
-
-## frikky/shuffle
-docker tag ghcr.io/shuffle/shuffle-app_sdk:nightly ghcr.io/frikky/shuffle-app_sdk:latest
-docker tag ghcr.io/shuffle/shuffle-worker:nightly  ghcr.io/frikky/shuffle-worker:latest
-docker tag ghcr.io/shuffle/shuffle-orborus:nightly ghcr.io/frikky/shuffle-orborus:latest
-docker tag ghcr.io/shuffle/shuffle-frontend:nightly ghcr.io/frikky/shuffle-frontend:latest
-docker tag ghcr.io/shuffle/shuffle-backend:nightly ghcr.io/frikky/shuffle-backend:latest
-
-docker push ghcr.io/frikky/shuffle-app_sdk:latest
-docker push ghcr.io/frikky/shuffle-worker:latest
-docker push ghcr.io/frikky/shuffle-orborus:latest
-docker push ghcr.io/frikky/shuffle-frontend:latest
-docker push ghcr.io/frikky/shuffle-backend:latest
-
-
-### 1.1.0 releases:
-## shuffle/shuffle
-docker tag ghcr.io/shuffle/shuffle-app_sdk:nightly ghcr.io/shuffle/shuffle-app_sdk:1.1.0
-docker tag ghcr.io/shuffle/shuffle-worker:nightly  ghcr.io/shuffle/shuffle-worker:1.1.0
-docker tag ghcr.io/shuffle/shuffle-orborus:nightly ghcr.io/shuffle/shuffle-orborus:1.1.0
-docker tag ghcr.io/shuffle/shuffle-frontend:nightly ghcr.io/shuffle/shuffle-frontend:1.1.0
-docker tag ghcr.io/shuffle/shuffle-backend:nightly ghcr.io/shuffle/shuffle-backend:1.1.0
-
-docker push ghcr.io/shuffle/shuffle-app_sdk:1.1.0
-docker push ghcr.io/shuffle/shuffle-worker:1.1.0
-docker push ghcr.io/shuffle/shuffle-orborus:1.1.0
-docker push ghcr.io/shuffle/shuffle-frontend:1.1.0
-docker push ghcr.io/shuffle/shuffle-backend:1.1.0
-
-## frikky/shuffle
-docker tag ghcr.io/shuffle/shuffle-app_sdk:nightly ghcr.io/frikky/shuffle-app_sdk:1.1.0
-docker tag ghcr.io/shuffle/shuffle-worker:nightly  ghcr.io/frikky/shuffle-worker:1.1.0
-docker tag ghcr.io/shuffle/shuffle-orborus:nightly ghcr.io/frikky/shuffle-orborus:1.1.0
-docker tag ghcr.io/shuffle/shuffle-frontend:nightly ghcr.io/frikky/shuffle-frontend:1.1.0
-docker tag ghcr.io/shuffle/shuffle-backend:nightly ghcr.io/frikky/shuffle-backend:1.1.0
-
-docker push ghcr.io/frikky/shuffle-app_sdk:1.1.0
-docker push ghcr.io/frikky/shuffle-worker:1.1.0
-docker push ghcr.io/frikky/shuffle-orborus:1.1.0
-docker push ghcr.io/frikky/shuffle-frontend:1.1.0
-docker push ghcr.io/frikky/shuffle-backend:1.1.0
-
-### Manage worker-scale upload (Requires auth)
-# This is supposed to be unavailable, and only be downloadable by customers
-docker pull ghcr.io/shuffle/shuffle-worker-scale:latest
-docker save ghcr.io/shuffle/shuffle-worker-scale:latest -o shuffle-worker.zip
-echo "1. Upload shuffle-worker.zip to the shuffler.io public repo. If in a GitHub dev env, download the file and upload it manually."
-echo "2. Have customers download it with: wget URL"
-echo "3. Have customers use it with: docker load -i shuffle-worker.zip"
-
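The repeated tag/push pairs in the script above could be collapsed into a loop. A minimal dry-run sketch (it only prints the docker commands instead of executing them, so no registry access is needed; the image list mirrors the script, and the refactor itself is a suggestion, not part of the repo):

```sh
#!/bin/sh
# Dry run: print the tag/push command pairs for every image instead of running them.
images="app_sdk worker orborus frontend backend"

tag_and_push() { # $1 = target repo prefix, $2 = target tag
  for img in $images; do
    echo "docker tag ghcr.io/shuffle/shuffle-$img:nightly $1/shuffle-$img:$2"
    echo "docker push $1/shuffle-$img:$2"
  done
}

tag_and_push ghcr.io/frikky nightly   # NIGHTLY releases
tag_and_push ghcr.io/shuffle latest   # LATEST releases
tag_and_push ghcr.io/frikky latest
```

Dropping the `echo` prefixes would run the commands for real; the original's line-by-line form does keep each push trivially auditable, which may be why it is written that way.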

+ 0 - 71
Shuffle/.github/workflows/codeql-analysis.yml

@@ -1,71 +0,0 @@
-# For most projects, this workflow file will not need changing; you simply need
-# to commit it to your repository.
-#
-# You may wish to alter this file to override the set of languages analyzed,
-# or to provide custom queries or build logic.
-#
-# ******** NOTE ********
-# We have attempted to detect the languages in your repository. Please check
-# the `language` matrix defined below to confirm you have the correct set of
-# supported CodeQL languages.
-#
-name: "CodeQL"
-
-on:
-  push:
-    branches:
-      - master
-      - launch
-  pull_request:
-    # The branches below must be a subset of the branches above
-    branches:
-      - master
-      - launch
-  schedule:
-    - cron: '38 16 * * 4'
-
-jobs:
-  analyze:
-    name: Analyze
-    runs-on: ubuntu-latest
-
-    strategy:
-      fail-fast: false
-      matrix:
-        language: [ 'go', 'javascript', 'python' ]
-        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
-        # Learn more:
-        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
-
-    steps:
-    - name: Checkout repository
-      uses: actions/checkout@v3
-
-    # Initializes the CodeQL tools for scanning.
-    - name: Initialize CodeQL
-      uses: github/codeql-action/init@v2
-      with:
-        languages: ${{ matrix.language }}
-        # If you wish to specify custom queries, you can do so here or in a config file.
-        # By default, queries listed here will override any specified in a config file.
-        # Prefix the list here with "+" to use these queries and those in the config file.
-        # queries: ./path/to/local/query, your-org/your-repo/queries@main
-
-    # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
-    # If this step fails, then you should remove it and run the build manually (see below)
-    - name: Autobuild
-      uses: github/codeql-action/autobuild@v2
-
-    # ℹ️ Command-line programs to run using the OS shell.
-    # 📚 https://git.io/JvXDl
-
-    # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
-    #    and modify them (or add more) to build your code if your project
-    #    uses a compiled language
-
-    #- run: |
-    #   make bootstrap
-    #   make release
-
-    - name: Perform CodeQL Analysis
-      uses: github/codeql-action/analyze@v2

+ 0 - 82
Shuffle/.github/workflows/dockerbuild-nightly.yaml

@@ -1,82 +0,0 @@
-name: nightly-dockerbuild
-
-on:
-  workflow_dispatch:
-  push:
-    branches:
-      - nightly
-    paths:
-      - "**"
-      - "!.github/**"
-      - "!**.md"
-      - "!docker-compose.yml"
-jobs:
-  main:
-    runs-on: ubuntu-latest
-    continue-on-error: ${{ matrix.experimental }}
-    strategy:
-      fail-fast: false
-      matrix:
-        include:
-          - app: frontend
-            path: frontend
-            version: nightly
-            experimental: true
-          - app: backend
-            path: backend
-            version: nightly
-            experimental: true
-          - app: orborus
-            path: functions/onprem/orborus
-            version: nightly
-            experimental: true
-          - app: worker
-            path: functions/onprem/worker
-            version: nightly
-            experimental: true
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-        with:
-          platforms: "amd64,arm64,arm"
-
-      - name: Login to DockerHub
-        uses: docker/login-action@v3
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-
-      - name: Login to Ghcr
-        uses: docker/login-action@v3
-        with:
-          registry: ghcr.io
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Ghcr Build and push
-        id: docker_build
-        uses: docker/build-push-action@v4
-        env:
-          BUILDX_NO_DEFAULT_LOAD: true
-        with:
-          logout: false
-          context: ${{ matrix.path }}/
-          file: ${{ matrix.path }}/Dockerfile
-          platforms: linux/amd64,linux/arm64
-          push: true
-          cache-from: type=local,src=/tmp/.buildx-cache
-          cache-to: type=local,dest=/tmp/.buildx-cache
-          tags: |
-            ghcr.io/shuffle/shuffle-${{ matrix.app }}:${{ matrix.version }}
-            ${{ secrets.DOCKERHUB_USERNAME }}/shuffle-${{ matrix.app }}:${{ matrix.version }}
-            frikky/shuffle-${{ matrix.app }}:${{ matrix.version }}
-            frikky/shuffle:${{ matrix.app }}
-
-      - name: Image digest
-        run: echo ${{ steps.docker_build.outputs.digest }}
+ 0 - 83
Shuffle/.github/workflows/dockerbuild.yaml

@@ -1,83 +0,0 @@
1
-name: dockerbuild
2
-
3
-on:
4
-  workflow_dispatch:
5
-  push:
6
-    branches:
7
-      - main
8
-    paths:
9
-      - "**"
10
-      - "!.github/**"
11
-      - "!**.md"
12
-      - "!docker-compose.yml"
13
-jobs:
14
-  main:
15
-    runs-on: ubuntu-latest
16
-    continue-on-error: ${{ matrix.experimental }}
17
-    strategy:
18
-      fail-fast: false
19
-      matrix:
20
-        include:
21
-          - app: frontend
22
-            path: frontend
23
-            version: 2.2.0
24
-            experimental: true
25
-          - app: backend
26
-            path: backend
27
-            version: 2.2.0
28
-            experimental: true
29
-          - app: orborus
30
-            path: functions/onprem/orborus
31
-            version: 2.2.0
32
-            experimental: true
33
-          - app: worker
34
-            path: functions/onprem/worker
35
-            version: 2.2.0
36
-            experimental: true
37
-    steps:
38
-      - name: Checkout
39
-        uses: actions/checkout@v3
40
-
41
-      - name: Set up Docker Buildx
42
-        uses: docker/setup-buildx-action@v3
43
-
44
-      - name: Set up QEMU
45
-        uses: docker/setup-qemu-action@v3
46
-        with:
47
-          platforms: "amd64,arm64,arm"
48
-
49
-      - name: Login to DockerHub
50
-        uses: docker/login-action@v3
51
-        with:
52
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
53
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
54
-
55
-      - name: Login to Ghcr
56
-        uses: docker/login-action@v3
57
-        with:
58
-          registry: ghcr.io
59
-          username: ${{ github.actor }}
60
-          password: ${{ secrets.GITHUB_TOKEN }}
61
-
62
-      - name: Ghcr Build and push
63
-        id: docker_build
64
-        uses: docker/build-push-action@v4
65
-        env:
66
-          BUILDX_NO_DEFAULT_LOAD: true
67
-        with:
68
-          logout: false
69
-          context: ${{ matrix.path }}/
70
-          file: ${{ matrix.path }}/Dockerfile
71
-          platforms: linux/amd64,linux/arm64
72
-          push: true
73
-          cache-from: type=local,src=/tmp/.buildx-cache
74
-          cache-to: type=local,dest=/tmp/.buildx-cache
75
-          tags: |
76
-            ghcr.io/shuffle/shuffle-${{ matrix.app }}:${{ matrix.version }}
77
-            ghcr.io/shuffle/shuffle-${{ matrix.app }}:latest
78
-            ${{ secrets.DOCKERHUB_USERNAME }}/shuffle-${{ matrix.app }}:${{ matrix.version }}
79
-            frikky/shuffle-${{ matrix.app }}:${{ matrix.version }}
80
-            frikky/shuffle:${{ matrix.app }}
81
-
82
-      - name: Image digest
83
-        run: echo ${{ steps.docker_build.outputs.digest }}

+ 0 - 77
Shuffle/.github/workflows/helm-release.yml

@@ -1,77 +0,0 @@
-name: Helm Release
-
-on:
-  release:
-    types: [published]
-    branches:
-      - main
-      - nightly
-  push:
-    branches:
-      - nightly
-      - main
-    paths:
-      - "functions/kubernetes/charts/**"
-  workflow_dispatch:
-
-permissions:
-  contents: read
-  packages: write
-
-jobs:
-  main:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v4
-
-      - name: Setup Helm
-        uses: azure/setup-helm@v4
-        with:
-          version: v3.18.4 # I run it locally so I know it works :b
-
-      - name: Verify Helm
-        run: helm version
-
-      - name: Set versions
-        run: |
-          if [[ ${{ github.event_name }} == 'release' ]]; then
-            TAG_NAME="${{ github.event.release.tag_name }}"
-
-            # Remove the v prefix
-            VERSION=${TAG_NAME#v}
-
-            APP_VERSION="${VERSION}"
-            CHART_VERSION="${VERSION}"
-          else
-            APP_VERSION="latest"
-            CHART_VERSION="2.2.0"
-          fi
-
-          echo "APP_VERSION set to ${APP_VERSION}"
-          echo "CHART_VERSION set to ${CHART_VERSION}. Validating..."
-
-          # https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string
-          SEMVER_REGEX="^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
-
-          if echo "${CHART_VERSION}" | grep -Pq "${SEMVER_REGEX}"; then
-            echo "${CHART_VERSION} is a valid SemVer string";
-          else
-            echo "${CHART_VERSION} is an invalid SemVer string";
-            exit 1;
-          fi
-
-          echo "CHART_VERSION=${CHART_VERSION}" >> "$GITHUB_ENV"
-          echo "APP_VERSION=${APP_VERSION}" >> "$GITHUB_ENV"
-
-      - name: Update helm dependencies
-        run: helm dependency update ./functions/kubernetes/charts/shuffle
-
-      - name: Package Helm chart
-        run: helm package ./functions/kubernetes/charts/shuffle --version "${CHART_VERSION}" --app-version="${APP_VERSION}" --destination ./functions/kubernetes/charts
-
-      - name: Login to OCI registry (ghcr.io)
-        run: helm registry login ghcr.io --username ${{ github.actor }} --password ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Push helm chart
-        run: helm push ./functions/kubernetes/charts/shuffle-*.tgz oci://ghcr.io/shuffle/charts
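The workflow's SemVer validation can be exercised locally with the same regex. A small sketch (the `is_semver` helper is invented here for illustration; requires GNU grep for `-P`/PCRE support):

```sh
#!/bin/sh
# Same PCRE the workflow uses (from semver.org); -P enables Perl-compatible mode.
SEMVER_REGEX='^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$'

is_semver() { echo "$1" | grep -Pq "$SEMVER_REGEX"; }

is_semver "2.2.0"           && echo "2.2.0: valid"
is_semver "2.1.0-nightly.1" && echo "2.1.0-nightly.1: valid"
is_semver "v2.2.0"          || echo "v2.2.0: invalid (strip the leading v first, as the workflow does)"
```

This also shows why the workflow strips the `v` prefix from the release tag before validating: the suggested regex rejects `v`-prefixed versions.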

+ 0 - 31
Shuffle/.github/workflows/helm-test.yml

@@ -1,31 +0,0 @@
-name: Helm Test
-
-on:
-  push:
-    paths:
-      - "functions/kubernetes/charts/**"
-  workflow_dispatch:
-
-permissions:
-  contents: read
-
-jobs:
-  main:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v4
-
-      - name: Install apt dependencies
-        run: |
-          sudo apt-get install apt-transport-https -y --no-install-recommends
-          curl -fsSL https://packages.buildkite.com/helm-linux/helm-debian/gpgkey | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
-          echo "deb [signed-by=/usr/share/keyrings/helm.gpg] https://packages.buildkite.com/helm-linux/helm-debian/any/ any main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
-          sudo apt-get update
-          sudo apt-get install helm -y --no-install-recommends
-
-      - name: Update helm dependencies
-        run: helm dependency update ./functions/kubernetes/charts/shuffle
-
-      - name: Template helm chart using default values
-        run: helm template shuffle ./functions/kubernetes/charts/shuffle --debug --values ./functions/kubernetes/charts/shuffle/values.yaml

+ 0 - 174
Shuffle/.github/workflows/quick-testing.yml

@@ -1,174 +0,0 @@
-name: Docker Compose Up and Ping
-
-on:
-  workflow_run:
-    workflows: ["dockerbuild"]
-    types:
-      - completed
-  workflow_dispatch:
-
-jobs:
-  build:
-    runs-on: ${{ matrix.os }}
-    strategy:
-      matrix:
-        os: [ubuntu-latest]
-        architecture: [x64, arm64]
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v2
-
-      - name: Shai-Hulud 2.0 Detector
-        uses: gensecaihq/Shai-Hulud-2.0-Detector@v1.0.0
-
-      - name: Install Docker Compose
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y docker-compose
-          docker-compose --version
-
-      - name: Set up opensearch directory
-        run: chmod -R 777 shuffle-database
-
-      - name: Build the stack
-        run: docker compose up -d
-
-      - name: Wait for 30 seconds
-        run: sleep 30
-
-      - name: Check for restarting containers in a loop and fix perms again
-        run: |
-          # echo "Changing permissions on shuffle-database directory again"
-          # chmod -R 777 shuffle-database
-
-          ATTEMPTS=30  # Total time = ATTEMPTS * 1 second = 30 seconds
-          for i in $(seq 1 $ATTEMPTS); do
-            RESTARTING_CONTAINERS=$(docker ps --filter "status=restarting" --format "{{.Names}}")
-            if [ -n "$RESTARTING_CONTAINERS" ]; then
-              echo "The following containers are restarting:"
-              echo "$RESTARTING_CONTAINERS"
-              exit 1
-            fi
-            echo "No containers are restarting. Attempt $i/$ATTEMPTS."
-            sleep 1
-          done
-          echo "No containers were found in a restarting state after $ATTEMPTS checks."
-
-      - name: Check if the response from the frontend contains the word "Shuffle"
-        run: |
-          RESPONSE=$(curl -s http://localhost:3001)
-          if echo "$RESPONSE" | grep -q "Shuffle"; then
-            echo "The word 'Shuffle' was found in the response."
-          else
-            echo "The word 'Shuffle' was not found in the response."
-            exit 1
-          fi
-      - name: Register a user and check the status code
-        run: |
-          MAX_RETRIES=30
-          RETRY_INTERVAL=10
-          CONTAINER_NAME="shuffle-backend"
-
-          for (( i=1; i<=$MAX_RETRIES; i++ ))
-          do
-            STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" 'http://localhost:3001/api/v1/register' \
-              -H 'Accept: */*' \
-              -H 'Accept-Language: en-US,en;q=0.9' \
-              -H 'Connection: keep-alive' \
-              -H 'Content-Type: application/json' \
-              --data-raw '{"username":"demo@demo.io","password":"supercoolpassword"}')
-
-            if [ "$STATUS_CODE" -eq 200 ]; then
-              echo "User registration was successful with status code 200."
-              exit 0
-            elif [ "$STATUS_CODE" -ne 502 ]; then
-              echo "User registration failed with status code $STATUS_CODE."
-              exit 1
-            fi
-
-            echo "Received status code $STATUS_CODE. Retrying in $RETRY_INTERVAL seconds... ($i/$MAX_RETRIES)"
-            echo "Fetching last 30 lines of logs from container $CONTAINER_NAME..."
-
-            logs_output=$(docker logs --tail 30 "$CONTAINER_NAME")
-            echo "$logs_output"
-
-            echo "Fetching last 30 lines of logs from container shuffle-opensearch..."
-
-            opensearch_logs=$(docker logs --tail 30 shuffle-opensearch)
-            echo "$opensearch_logs"
-
-            sleep $RETRY_INTERVAL
-          done
-
-          echo "User registration failed after $MAX_RETRIES attempts."
-          exit 1
-
-      - name: Run Selenium testing for frontend
-        run: |
-          cd $GITHUB_WORKSPACE/frontend
-          # write some log to see the current directory
-          chmod +x frontend-testing.sh
-          ./frontend-testing.sh
-
-      - name: Get the API key and run a health check
-        id: health_check
-        run: |
-          RESPONSE=$(curl -s -k -u admin:StrongShufflePassword321! 'https://localhost:9200/users/_search')
-          echo "Raw Response: $RESPONSE"
-
-          API_KEY=$(echo "$RESPONSE" | jq -r '.hits.hits[0]._source.apikey')
-          if [ -n "$API_KEY" ] && [ "$API_KEY" != "null" ]; then
-            echo "Admin API key: $API_KEY"
-            echo "API_KEY=$API_KEY" >> $GITHUB_ENV
-          else
-            echo "Failed to retrieve the API key for the admin user."
-            exit 1
-          fi
-
-          echo "Waiting 1 minute before sending the health API request..."
-          sleep 60
-
-          echo "Checking health API..."
-          HEALTH_RESPONSE=$(curl -s 'http://localhost:3001/api/v1/health?force=true' \
-            -H "Authorization: Bearer $API_KEY")
-
-          echo "Health API Response: $HEALTH_RESPONSE"
-
-          WORKFLOWS_CREATE=$(echo "$HEALTH_RESPONSE" | jq -r '.workflows.create')
-          WORKFLOWS_RUN=$(echo "$HEALTH_RESPONSE" | jq -r '.workflows.run')
-          WORKFLOWS_RUN_FINISHED=$(echo "$HEALTH_RESPONSE" | jq -r '.workflows.run_finished')
-          WORKFLOWS_RUN_STATUS=$(echo "$HEALTH_RESPONSE" | jq -r '.workflows.run_status')
-          WORKFLOWS_DELETE=$(echo "$HEALTH_RESPONSE" | jq -r '.workflows.delete')
-
-          echo "WORKFLOWS_CREATE: $WORKFLOWS_CREATE"
-          echo "WORKFLOWS_RUN: $WORKFLOWS_RUN"
-          echo "WORKFLOWS_RUN_FINISHED: $WORKFLOWS_RUN_FINISHED"
-          echo "WORKFLOWS_RUN_STATUS: $WORKFLOWS_RUN_STATUS"
-          echo "WORKFLOWS_DELETE: $WORKFLOWS_DELETE"
-
-          if [ "$WORKFLOWS_CREATE" = "true" ] && [ "$WORKFLOWS_RUN" = "true" ] && [ "$WORKFLOWS_RUN_FINISHED" = "true" ] && [ "$WORKFLOWS_RUN_STATUS" = "FINISHED" ] && [ "$WORKFLOWS_DELETE" = "true" ]; then
-            echo "Health endpoint check was successful."
-            exit 0
-          else
-            echo "Health check failed."
-            exit 1
-          fi
-
-  notify:
-    needs: build
-    if: failure()
-    runs-on: ubuntu-latest
-    steps:
-      - name: Send Twilio Alert
-        run: |
-          WORKFLOW_URL="https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
-          MESSAGE="🚨 Shuffle Workflow Failed!
-          Repository: ${{ github.repository }}
-          Branch: ${{ github.ref_name }}
-          Workflow URL: $WORKFLOW_URL"
-
-          curl -s -X POST https://api.twilio.com/2010-04-01/Accounts/${{ secrets.TWILIO_ACCOUNT_SID }}/Messages.json \
-              --data-urlencode "To=${{ secrets.TWILIO_TO_NUMBER }}" \
-              --data-urlencode "From=${{ secrets.TWILIO_FROM_NUMBER }}" \
-              --data-urlencode "Body=$MESSAGE" \
-              -u "${{ secrets.TWILIO_ACCOUNT_SID }}:${{ secrets.TWILIO_AUTH_TOKEN }}" > /dev/null
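The registration step in this workflow is essentially a hand-rolled retry loop. A generic helper in the same spirit (a sketch, not part of the workflow - the `retry` function name is invented here):

```sh
#!/bin/sh
# retry MAX PAUSE CMD [ARGS...]: re-run CMD until it succeeds or MAX attempts pass.
retry() {
  max=$1; pause=$2; shift 2
  i=1
  while [ "$i" -le "$max" ]; do
    "$@" && return 0                                # success: propagate exit 0
    echo "Attempt $i/$max failed; retrying in ${pause}s" >&2
    sleep "$pause"
    i=$((i + 1))
  done
  return 1                                          # exhausted all attempts
}
```

With a helper like this, the 30x10s registration check collapses to roughly `retry 30 10 your_curl_check`, keeping the container-log-dumping logic as a separate concern.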

+ 0 - 92
Shuffle/.github/workflows/tagged-nightly-release.yaml

@@ -1,92 +0,0 @@
-name: Tagged Nightly Release
-on:
-  release:
-    types: [published]
-    branches:
-      - nightly
-
-# There is a very clear point to this existing:
-# we also want to release versions that look like this:
-# v2.1.0-nightly-date, v2.1.0-nightly-date-1, v2.1.0-nightly-date-2
-# We NEVER want to send customers a "nightly" tag. We always want to send them
-# a tagged nightly tag, so that when something breaks, they can always
-# point to it.
-
-jobs:
-  main:
-    runs-on: ubuntu-latest
-    continue-on-error: ${{ matrix.experimental }}
-    strategy:
-      fail-fast: false
-      matrix:
-        include:
-          - app: frontend
-            path: frontend
-            experimental: true
-          - app: backend
-            path: backend
-            experimental: true
-          - app: app_sdk
-            path: backend/app_sdk
-            experimental: true
-          - app: orborus
-            path: functions/onprem/orborus
-            experimental: true
-          - app: worker
-            path: functions/onprem/worker
-            experimental: true
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3
-
-      - name: Set version
-        id: set_version
-        run: |
-          if [[ ${{ github.event_name }} == 'release' ]]; then
-            echo "VERSION=${{ github.event.release.tag_name }}" >> $GITHUB_OUTPUT
-          else
-            echo "VERSION=nightly-untagged-latest" >> $GITHUB_OUTPUT
-          fi
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-        with:
-          platforms: "amd64,arm64,arm"
-
-      - name: Login to DockerHub
-        uses: docker/login-action@v3
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-
-      - name: Login to Ghcr
-        uses: docker/login-action@v3
-        with:
-          registry: ghcr.io
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Ghcr Build and push
-        id: docker_build
-        uses: docker/build-push-action@v4
-        env:
-          BUILDX_NO_DEFAULT_LOAD: true
-        with:
-          logout: false
-          context: ${{ matrix.path }}/
-          file: ${{ matrix.path }}/Dockerfile
-          platforms: linux/amd64,linux/arm64
-          push: true
-          cache-from: type=local,src=/tmp/.buildx-cache
-          cache-to: type=local,dest=/tmp/.buildx-cache
-          tags: |
-            ghcr.io/shuffle/shuffle-${{ matrix.app }}:${{ steps.set_version.outputs.VERSION }}
-            ${{ secrets.DOCKERHUB_USERNAME }}/shuffle-${{ matrix.app }}:${{ steps.set_version.outputs.VERSION }}
-            frikky/shuffle-${{ matrix.app }}:${{ steps.set_version.outputs.VERSION }}
-            frikky/shuffle:${{ matrix.app }}
-
-      - name: Image digest
-        run: echo ${{ steps.docker_build.outputs.digest }}

Shuffle/.gitignore (+0 -32)
@@ -1,32 +0,0 @@
-*node_modules/
-*build/
-*.lock
-*.swo
-*.swp
-*.swn
-*__pycache__*
-
-Shuffle-*.json
-backend/go-app/generated*
-
-functions/generated_apps
-*.zip
-
-*openapi-parsers/generated/*
-*openapi-parsers/other/*
-
-
-backend/onprem/app_sdk/apps
-*test.py
-
-*.exe
-*debug*
-!**/RuntimeDebugger.jsx
-
-shuffle-database/batch_metrics_enabled.conf
-shuffle-database/logging_enabled.conf
-shuffle-database/nodes
-shuffle-database/performance_analyzer_enabled.conf
-shuffle-database/rca_enabled.conf
-
-#*/package-lock.json

iris-web/.env (+103 -0)
@@ -0,0 +1,103 @@
+# -- COMMON
+LOG_LEVEL=info
+
+# -- NGINX
+NGINX_IMAGE_NAME=ghcr.io/dfir-iris/iriswebapp_nginx
+NGINX_IMAGE_TAG=latest
+
+SERVER_NAME=iris.app.dev
+KEY_FILENAME=iris_dev_key.pem
+CERT_FILENAME=iris_dev_cert.pem
+
+# -- DATABASE
+DB_IMAGE_NAME=ghcr.io/dfir-iris/iriswebapp_db
+DB_IMAGE_TAG=latest
+
+POSTGRES_USER=postgres
+POSTGRES_PASSWORD=__MUST_BE_CHANGED__
+POSTGRES_ADMIN_USER=raptor
+POSTGRES_ADMIN_PASSWORD=__MUST_BE_CHANGED__
+POSTGRES_DB=iris_db
+
+POSTGRES_SERVER=db
+POSTGRES_PORT=5432
+
+# -- IRIS
+APP_IMAGE_NAME=ghcr.io/dfir-iris/iriswebapp_app
+APP_IMAGE_TAG=latest
+
+DOCKERIZED=1
+
+IRIS_SECRET_KEY=AVerySuperSecretKey-SoNotThisOne
+IRIS_SECURITY_PASSWORD_SALT=ARandomSalt-NotThisOneEither
+IRIS_UPSTREAM_SERVER=app
+IRIS_UPSTREAM_PORT=8000
+
+IRIS_FRONTEND_SERVER=frontend
+IRIS_FRONTEND_PORT=5173
+
+IRIS_SVELTEKIT_FRONTEND_DIR=../iris-frontend
+
+
+# -- WORKER
+CELERY_BROKER=amqp://rabbitmq
+
+# -- AUTH
+IRIS_AUTHENTICATION_TYPE=local
+## optional
+IRIS_ADM_PASSWORD=MySuperAdminPassword!
+#IRIS_ADM_API_KEY=B8BA5D730210B50F41C06941582D7965D57319D5685440587F98DFDC45A01594
+#IRIS_ADM_EMAIL=admin@localhost
+IRIS_ADM_USERNAME=administrator
+# requests the just-in-time creation of users with ldap authentification (see https://github.com/dfir-iris/iris-web/issues/203)
+#IRIS_AUTHENTICATION_CREATE_USER_IF_NOT_EXIST=True
+# the group to which newly created users are initially added, default value is Analysts
+#IRIS_NEW_USERS_DEFAULT_GROUP=
+
+# -- FOR LDAP AUTHENTICATION
+#IRIS_AUTHENTICATION_TYPE=ldap
+#LDAP_SERVER=127.0.0.1
+#LDAP_AUTHENTICATION_TYPE=SIMPLE
+#LDAP_PORT=3890
+#LDAP_USER_PREFIX=uid=
+#LDAP_USER_SUFFIX=ou=people,dc=example,dc=com
+#LDAP_USE_SSL=False
+# base DN in which to search for users
+#LDAP_SEARCH_DN=ou=users,dc=example,dc=org
+# unique identifier to search the user
+#LDAP_ATTRIBUTE_IDENTIFIER=cn
+# name of the attribute to retrieve the user's display name
+#LDAP_ATTRIBUTE_DISPLAY_NAME=displayName
+# name of the attribute to retrieve the user's email address
+#LDAP_ATTRIBUTE_MAIL=mail
+#LDAP_VALIDATE_CERTIFICATE=True
+#LDAP_TLS_VERSION=1.2
+#LDAP_SERVER_CERTIFICATE=
+#LDAP_PRIVATE_KEY=
+#LDAP_PRIVATE_KEY_PASSWORD=
+
+# -- FOR OIDC AUTHENTICATION
+# IRIS_AUTHENTICATION_TYPE=oidc
+# OIDC_ISSUER_URL=
+# OIDC_CLIENT_ID=
+# OIDC_CLIENT_SECRET=
+# endpoints only required if provider doesn't support metadata discovery
+# OIDC_AUTH_ENDPOINT=
+# OIDC_TOKEN_ENDPOINT=
+# optional to include logout from oidc provider
+# OIDC_END_SESSION_ENDPOINT=
+# OIDC redirect URL for your IDP: https://<IRIS_SERVER_NAME>/oidc-authorize
+
+# -- LISTENING PORT
+INTERFACE_HTTPS_PORT=443
+
+# -- FOR OIDC AUTHENTICATION
+#IRIS_AUTHENTICATION_TYPE=oidc
+#OIDC_ISSUER_URL=
+#OIDC_CLIENT_ID=
+#OIDC_CLIENT_SECRET=
+# endpoints only required if provider doesn't support metadata discovery
+#OIDC_AUTH_ENDPOINT=
+#OIDC_TOKEN_ENDPOINT=
+# optional to include logout from oidc provider
+#OIDC_END_SESSION_ENDPOINT=

iris-web/.env.model (+103 -0)
@@ -0,0 +1,103 @@
+# -- COMMON
+LOG_LEVEL=info
+
+# -- NGINX
+NGINX_IMAGE_NAME=ghcr.io/dfir-iris/iriswebapp_nginx
+NGINX_IMAGE_TAG=latest
+
+SERVER_NAME=iris.app.dev
+KEY_FILENAME=iris_dev_key.pem
+CERT_FILENAME=iris_dev_cert.pem
+
+# -- DATABASE
+DB_IMAGE_NAME=ghcr.io/dfir-iris/iriswebapp_db
+DB_IMAGE_TAG=latest
+
+POSTGRES_USER=postgres
+POSTGRES_PASSWORD=__MUST_BE_CHANGED__
+POSTGRES_ADMIN_USER=raptor
+POSTGRES_ADMIN_PASSWORD=__MUST_BE_CHANGED__
+POSTGRES_DB=iris_db
+
+POSTGRES_SERVER=db
+POSTGRES_PORT=5432
+
+# -- IRIS
+APP_IMAGE_NAME=ghcr.io/dfir-iris/iriswebapp_app
+APP_IMAGE_TAG=latest
+
+DOCKERIZED=1
+
+IRIS_SECRET_KEY=AVerySuperSecretKey-SoNotThisOne
+IRIS_SECURITY_PASSWORD_SALT=ARandomSalt-NotThisOneEither
+IRIS_UPSTREAM_SERVER=app
+IRIS_UPSTREAM_PORT=8000
+
+IRIS_FRONTEND_SERVER=frontend
+IRIS_FRONTEND_PORT=5173
+
+IRIS_SVELTEKIT_FRONTEND_DIR=../iris-frontend
+
+
+# -- WORKER
+CELERY_BROKER=amqp://rabbitmq
+
+# -- AUTH
+IRIS_AUTHENTICATION_TYPE=local
+## optional
+#IRIS_ADM_PASSWORD=MySuperAdminPassword!
+#IRIS_ADM_API_KEY=B8BA5D730210B50F41C06941582D7965D57319D5685440587F98DFDC45A01594
+#IRIS_ADM_EMAIL=admin@localhost
+#IRIS_ADM_USERNAME=administrator
+# requests the just-in-time creation of users with ldap authentification (see https://github.com/dfir-iris/iris-web/issues/203)
+#IRIS_AUTHENTICATION_CREATE_USER_IF_NOT_EXIST=True
+# the group to which newly created users are initially added, default value is Analysts
+#IRIS_NEW_USERS_DEFAULT_GROUP=
+
+# -- FOR LDAP AUTHENTICATION
+#IRIS_AUTHENTICATION_TYPE=ldap
+#LDAP_SERVER=127.0.0.1
+#LDAP_AUTHENTICATION_TYPE=SIMPLE
+#LDAP_PORT=3890
+#LDAP_USER_PREFIX=uid=
+#LDAP_USER_SUFFIX=ou=people,dc=example,dc=com
+#LDAP_USE_SSL=False
+# base DN in which to search for users
+#LDAP_SEARCH_DN=ou=users,dc=example,dc=org
+# unique identifier to search the user
+#LDAP_ATTRIBUTE_IDENTIFIER=cn
+# name of the attribute to retrieve the user's display name
+#LDAP_ATTRIBUTE_DISPLAY_NAME=displayName
+# name of the attribute to retrieve the user's email address
+#LDAP_ATTRIBUTE_MAIL=mail
+#LDAP_VALIDATE_CERTIFICATE=True
+#LDAP_TLS_VERSION=1.2
+#LDAP_SERVER_CERTIFICATE=
+#LDAP_PRIVATE_KEY=
+#LDAP_PRIVATE_KEY_PASSWORD=
+
+# -- FOR OIDC AUTHENTICATION
+# IRIS_AUTHENTICATION_TYPE=oidc
+# OIDC_ISSUER_URL=
+# OIDC_CLIENT_ID=
+# OIDC_CLIENT_SECRET=
+# endpoints only required if provider doesn't support metadata discovery
+# OIDC_AUTH_ENDPOINT=
+# OIDC_TOKEN_ENDPOINT=
+# optional to include logout from oidc provider
+# OIDC_END_SESSION_ENDPOINT=
+# OIDC redirect URL for your IDP: https://<IRIS_SERVER_NAME>/oidc-authorize
+
+# -- LISTENING PORT
+INTERFACE_HTTPS_PORT=443
+
+# -- FOR OIDC AUTHENTICATION
+#IRIS_AUTHENTICATION_TYPE=oidc
+#OIDC_ISSUER_URL=
+#OIDC_CLIENT_ID=
+#OIDC_CLIENT_SECRET=
+# endpoints only required if provider doesn't support metadata discovery
+#OIDC_AUTH_ENDPOINT=
+#OIDC_TOKEN_ENDPOINT=
+# optional to include logout from oidc provider
+#OIDC_END_SESSION_ENDPOINT=

iris-web/.github/FUNDING.yml (+0 -2)
@@ -1,2 +0,0 @@
-github: [whikernel]
-open_collective: dfir-iris

iris-web/.github/ISSUE_TEMPLATE/bug_report.md (+0 -38)
@@ -1,38 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: "[BUG] "
-labels: bug
-assignees: ''
-
----
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-**To Reproduce**
-Steps to reproduce the behavior:
-1. Go to '...'
-2. Click on '....'
-3. Scroll down to '....'
-4. See error
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Screenshots**
-If applicable, add screenshots to help explain your problem.
-
-**Desktop (please complete the following information):**
- - OS: [e.g. iOS]
- - Browser [e.g. chrome, safari]
- - Version [e.g. 22]
-
-**Smartphone (please complete the following information):**
- - Device: [e.g. iPhone6]
 - OS: [e.g. iOS8.1]
- - Browser [e.g. stock browser, safari]
- - Version [e.g. 22]
-
-**Additional context**
-Add any other context about the problem here.

iris-web/.github/ISSUE_TEMPLATE/feature_request.md (+0 -22)
@@ -1,22 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project
-title: "[FR]"
-labels: enhancement
-assignees: ''
-
----
-
-*Please ensure your feature request is not already on the roadmap or associated with an issue. This can be checked [here](https://github.com/orgs/dfir-iris/projects/1/views/4).*
-
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-Add any other context or screenshots about the feature request here.

iris-web/.github/workflows/build-db.yml (+0 -63)
@@ -1,63 +0,0 @@
-name: Build and push DB image
-
-on:
-  push:
-    # Publish semver tags as releases.
-    tags: [ 'v*.*.*' ]
-  pull_request:
-    branches: [ "main" ]
-  workflow_dispatch:
-
-env:
-  REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}
-
-jobs:
-  build-db:
-    runs-on: ubuntu-latest
-
-    permissions:
-      packages: write
-      contents: read
-
-    steps:
-      - run: |
-          echo "The job was automatically triggered by a ${{ github.event_name }} event."
-          echo "This job is now running on a ${{ runner.os }} server hosted by github!"
-          echo "The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
-
-      - name: Check out repository code
-        uses: actions/checkout@v4
-
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-        with:
-          platforms: 'arm64,amd64'
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
-      - name: Log into registry ${{ env.REGISTRY }}
-        if: github.event_name != 'pull_request'
-        uses: docker/login-action@v3
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Extract Docker metadata
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.REGISTRY }}/dfir-iris/iriswebapp_db
-
-      - name: Build and push
-        uses: docker/build-push-action@v5
-        with:
-          context: docker/db
-          platforms: linux/amd64,linux/arm64
-          push: ${{ github.event_name != 'pull_request' }} # Don't push on PR
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max

iris-web/.github/workflows/build-nginx.yml (+0 -66)
@@ -1,66 +0,0 @@
-name: Build and push Nginx image
-
-on:
-  push:
-    # Publish semver tags as releases.
-    tags: [ 'v*.*.*' ]
-  pull_request:
-    branches: [ "main" ]
-  workflow_dispatch:
-
-env:
-  REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}
-
-jobs:
-  build-db:
-    runs-on: ubuntu-latest
-
-    permissions:
-      packages: write
-      contents: read
-
-    steps:
-      - run: |
-          echo "The job was automatically triggered by a ${{ github.event_name }} event."
-          echo "This job is now running on a ${{ runner.os }} server hosted by github!"
-          echo "The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
-
-      - name: Check out repository code
-        uses: actions/checkout@v4
-
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-        with:
-          platforms: 'arm64,amd64'
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
-      - name: Log into registry ${{ env.REGISTRY }}
-        if: github.event_name != 'pull_request'
-        uses: docker/login-action@v3
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Extract Docker metadata
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.REGISTRY }}/dfir-iris/iriswebapp_nginx
-
-      - name: Build and push
-        uses: docker/build-push-action@v5
-        with:
-          context: docker/nginx
-          platforms: linux/amd64,linux/arm64
-          push: ${{ github.event_name != 'pull_request' }} # Don't push on PR
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-          build-args: |
-            NGINX_CONF_GID=1234
-            NGINX_CONF_FILE=nginx.conf

iris-web/.github/workflows/build-webApp.yml (+0 -64)
@@ -1,64 +0,0 @@
-name: Build and push WebApp image
-
-on:
-  push:
-    # Publish semver tags as releases.
-    tags: [ 'v*.*.*' ]
-  pull_request:
-    branches: [ "main" ]
-  workflow_dispatch:
-
-env:
-  REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}
-
-jobs:
-  build-webapp:
-    runs-on: ubuntu-latest
-
-    permissions:
-      packages: write
-      contents: read
-
-    steps:
-      - run: |
-          echo "The job was automatically triggered by a ${{ github.event_name }} event."
-          echo "This job is now running on a ${{ runner.os }} server hosted by github!"
-          echo "The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
-
-      - name: Check out repository code
-        uses: actions/checkout@v4
-
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-        with:
-          platforms: 'arm64,amd64'
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
-      - name: Log into registry ${{ env.REGISTRY }}
-        if: github.event_name != 'pull_request'
-        uses: docker/login-action@v3
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Extract Docker metadata
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.REGISTRY }}/dfir-iris/iriswebapp_app
-
-      - name: Build and push
-        uses: docker/build-push-action@v5
-        with:
-          context: .
-          platforms: linux/amd64,linux/arm64
-          push: ${{ github.event_name != 'pull_request' }} # Don't push on PR
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-          file: docker/webApp/Dockerfile

iris-web/.github/workflows/chart-releaser.yml (+0 -33)
@@ -1,33 +0,0 @@
-name: Release Iris Web Helm Chart
-
-on:
-  push:
-    branches:
-      - main
-  workflow_dispatch:
-
-jobs:
-  release:
-    permissions:
-      contents: write
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-
-      - name: Configure Git
-        run: |
-          git config user.name "$GITHUB_ACTOR"
-          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
-
-      - name: Install Helm
-        uses: azure/setup-helm@v4
-
-      - name: Run chart-releaser
-        uses: helm/chart-releaser-action@v1
-        with:
-            charts_dir: deploy/kubernetes
-        env:
-          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

iris-web/.github/workflows/ci.yml (+0 -238)
@@ -1,238 +0,0 @@
-#  IRIS Source Code
-#  Copyright (C) 2023 - DFIR-IRIS
-#  contact@dfir-iris.org
-#
-#  This program is free software; you can redistribute it and/or
-#  modify it under the terms of the GNU Lesser General Public
-#  License as published by the Free Software Foundation; either
-#  version 3 of the License, or (at your option) any later version.
-#
-#  This program is distributed in the hope that it will be useful,
-#  but WITHOUT ANY WARRANTY; without even the implied warranty of
-#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-#  Lesser General Public License for more details.
-#
-#  You should have received a copy of the GNU Lesser General Public License
-#  along with this program; if not, write to the Free Software Foundation,
-#  Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
-
-name: Continuous Integration
-on: [push, pull_request]
-
-jobs:
-
-  static-checks:
-    name: Static analyis checks
-    runs-on: ubuntu-22.04
-    steps:
-      - name: Check out iris
-        uses: actions/checkout@v4
-      - name: Check code with ruff
-        uses: astral-sh/ruff-action@v2
-        with:
-          args: check --output-format=github
-          src: ./source
-
-  build-docker-db:
-    name: Build docker db
-    runs-on: ubuntu-22.04
-    steps:
-      - name: Check out iris
-        uses: actions/checkout@v4
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Build and export
-        uses: docker/build-push-action@v6
-        with:
-          context: docker/db
-          tags: iriswebapp_db:develop
-          outputs: type=docker,dest=${{ runner.temp }}/iriswebapp_db.tar
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-      - name: Upload artifact
-        uses: actions/upload-artifact@v4
-        with:
-          name: iriswebapp_db
-          path: ${{ runner.temp }}/iriswebapp_db.tar
-
-  build-docker-nginx:
-    name: Build docker nginx
-    runs-on: ubuntu-22.04
-    steps:
-      - name: Check out iris
-        uses: actions/checkout@v4
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Build and export
-        uses: docker/build-push-action@v6
-        with:
-          context: docker/nginx
-          tags: iriswebapp_nginx:develop
-          build-args: |
-            NGINX_CONF_GID=1234
-            NGINX_CONF_FILE=nginx.conf
-          outputs: type=docker,dest=${{ runner.temp }}/iriswebapp_nginx.tar
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-      - name: Upload artifact
-        uses: actions/upload-artifact@v4
-        with:
-          name: iriswebapp_nginx
-          path: ${{ runner.temp }}/iriswebapp_nginx.tar
-
-  build-docker-app:
-    name: Build docker app
-    runs-on: ubuntu-22.04
-    steps:
-      - name: Check out iris
-        uses: actions/checkout@v4
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Build and export
-        uses: docker/build-push-action@v6
-        with:
-          context: .
-          file: docker/webApp/Dockerfile
-          tags: iriswebapp_app:develop
-          outputs: type=docker,dest=${{ runner.temp }}/iriswebapp_app.tar
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-      - name: Upload artifact
-        uses: actions/upload-artifact@v4
-        with:
-          name: iriswebapp_app
-          path: ${{ runner.temp }}/iriswebapp_app.tar
-
-  build-graphql-documentation:
-    name: Generate graphQL documentation
-    runs-on: ubuntu-22.04
-    needs:
-      - build-docker-db
-      - build-docker-nginx
-      - build-docker-app
-    steps:
-      - name: Download artifacts
-        uses: actions/download-artifact@v4
-        with:
-          pattern: iriswebapp_*
-          path: ${{ runner.temp }}
-          merge-multiple: true
-      - name: Load docker images
-        run: |
-          docker load --input ${{ runner.temp }}/iriswebapp_db.tar
-          docker load --input ${{ runner.temp }}/iriswebapp_nginx.tar
-          docker load --input ${{ runner.temp }}/iriswebapp_app.tar
-      - name: Check out iris
-        uses: actions/checkout@v4
-      - name: Start development server
-        run: |
-          # Even though, we use --env-file option when running docker compose, this is still necessary, because the compose has a env_file attribute :(
-          # TODO should move basic.env file, which is in directory tests, up. It's used in several places. Maybe, rename it into dev.env
-          cp tests/data/basic.env .env
-          docker compose --file docker-compose.dev.yml --env-file tests/data/basic.env up --detach
-      - name: Generate GraphQL documentation
-        run: |
-          npx spectaql@^3.0.2 source/spectaql/config.yml
-      - name: Stop development server
-        run: |
-          docker compose down
-      - uses: actions/upload-artifact@v4
-        with:
-            name: GraphQL DFIR-IRIS documentation
-            path: public
-            if-no-files-found: error
-
-  test-api:
-    name: Test API
-    runs-on: ubuntu-22.04
-    needs:
-      - build-docker-db
-      - build-docker-nginx
-      - build-docker-app
-    steps:
-      - name: Download artifacts
-        uses: actions/download-artifact@v4
-        with:
-          pattern: iriswebapp_*
-          path: ${{ runner.temp }}
-          merge-multiple: true
-      - name: Load docker images
-        run: |
-          docker load --input ${{ runner.temp }}/iriswebapp_db.tar
-          docker load --input ${{ runner.temp }}/iriswebapp_nginx.tar
-          docker load --input ${{ runner.temp }}/iriswebapp_app.tar
-      - name: Check out iris
-        uses: actions/checkout@v4
-      - name: Start development server
-        run: |
-          # Even though, we use --env-file option when running docker compose, this is still necessary, because the compose has a env_file attribute :(
-          # TODO should move basic.env file, which is in directory tests, up. It's used in several places. Maybe, rename it into dev.env
-          cp tests/data/basic.env .env
-          docker compose --file docker-compose.dev.yml up --detach
-      - name: Run tests
-        working-directory: tests
-        run: |
-          python -m venv venv
-          source venv/bin/activate
-          pip install -r requirements.txt
-          PYTHONUNBUFFERED=true python -m unittest --verbose
-      - name: Stop development server
-        run: |
-          docker compose down
-
-  test-e2e:
-    name: End to end tests
-    runs-on: ubuntu-22.04
-    needs:
-      - build-docker-db
-      - build-docker-nginx
-      - build-docker-app
-    steps:
-      - name: Download artifacts
-        uses: actions/download-artifact@v4
-        with:
-          pattern: iriswebapp_*
-          path: ${{ runner.temp }}
-          merge-multiple: true
-      - name: Load docker images
-        run: |
-          docker load --input ${{ runner.temp }}/iriswebapp_db.tar
-          docker load --input ${{ runner.temp }}/iriswebapp_nginx.tar
-          docker load --input ${{ runner.temp }}/iriswebapp_app.tar
-      - name: Check out iris
-        uses: actions/checkout@v4
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20
-          cache: npm
-          cache-dependency-path: |
-            ui/package-lock.json
-            e2e/package-lock.json
-      - name: Build ui to be mounted in development docker
-        working-directory: ui
-        run: |
-          npm ci
-          npm run build
-      - name: Install e2e dependencies
-        working-directory: e2e
-        run: npm ci
-      - name: Install playwright dependencies
-        working-directory: e2e
-        run: npx playwright install chromium firefox
-      - name: Start development server
-        run: |
-          # TODO should move basic.env file, which is in directory tests, up. It's used in several places. Maybe, rename it into dev.env
-          cp tests/data/basic.env .env
-          docker compose --file docker-compose.dev.yml up --detach
-      - name: Run end to end tests
-        working-directory: e2e
-        run: npx playwright test
-      - name: Stop development server
-        run: |
-          docker compose down
-      - uses: actions/upload-artifact@v4
-        if: ${{ always() }}
-        with:
-          name: playwright-report
-          path: e2e/playwright-report/
-

iris-web/.gitignore (+0 -32)
@@ -1,32 +0,0 @@
-flask/
-*.pyc
-dev
-node_modules
-source/app/database.db
-source/app/build
-yarn.lock
-yarn-error.log
-*.psd
-env/
-venv/
-test/
-source/app/config.priv.ini
-source/app/config.test.ini
-.idea/
-libesedb-*/
-orcparser/
-venv1/
-.DS_Store
-.env*
-.venv/
-.vscode/
-*.code-workspace
-nohup.out
-celerybeat-schedule.db
-.scannerwork/
-source/app/static/assets/dist/
-source/app/static/assets/img/graph/*
-!source/app/static/assets/img/graph/*.png
-run_nv_test.py
-!certificates/web_certificates/iris_dev_*
-certificates/web_certificates/*.pem

iris-web/certificates/web_certificates/iris_dev_cert.pem (+23 -0)
@@ -0,0 +1,23 @@
+-----BEGIN CERTIFICATE-----
+MIID5zCCAs+gAwIBAgIUR1vu5qp+qpHjXhQ/6sUnFOtgW2kwDQYJKoZIhvcNAQEL
+BQAwgYIxCzAJBgNVBAYTAkZSMRYwFAYDVQQIDA1JbGUgZGUgRnJhbmNlMQ4wDAYD
+VQQHDAVQYXJpczEYMBYGA1UECgwPQ1NJUlQtRlIgQWlyYnVzMRowGAYDVQQLDBFJ
+bmNpZGVudCBSZXNwb25zZTEVMBMGA1UEAwwMaXJpcy5hcHAuZGV2MB4XDTIxMTIw
+OTE0MTUyMloXDTIyMTIwOTE0MTUyMlowgYIxCzAJBgNVBAYTAkZSMRYwFAYDVQQI
+DA1JbGUgZGUgRnJhbmNlMQ4wDAYDVQQHDAVQYXJpczEYMBYGA1UECgwPQ1NJUlQt
+RlIgQWlyYnVzMRowGAYDVQQLDBFJbmNpZGVudCBSZXNwb25zZTEVMBMGA1UEAwwM
+aXJpcy5hcHAuZGV2MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlZWm
+hypo/ZJMjmqHSBviR9pzYJYaiSlafeEUa/9LlBe4Ecov74XLVy+3TuG3w1YFzD1M
+57j+EJrcZcl5E67uIVreAtJNLdgqDyCk6nCk3BdGgEnhcmQCevLXaCsBH+Z9lBRy
+ruuTQAihq3QJztosTuI+so9AaZgSmOm17vL45S3QiFIPUB/Pgv60BfYkd0SV1V4Y
+709IKvlCXSixryA0hkqT12D6fNFDPqwbn1o7Ifd7qVqVxD0QS8Wf56PUD8J+41A7
+WLzSy/fNKAUOSoOyhWvdh7s5uciqJEXDMh1BvrpBSCmkmW8aprWVOr6yaugmBg58
+g4oaM0xWOcFFeIcdrQIDAQABo1MwUTAdBgNVHQ4EFgQUNavtDZIB1hMxp7X0pytN
+xACnEigwHwYDVR0jBBgwFoAUNavtDZIB1hMxp7X0pytNxACnEigwDwYDVR0TAQH/
+BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAlA9xbWAfoGVOLUY/nuQ+0slryO/e
+C2ChAHehtNKJa3DOGJXTOp1qJJdfuHAHZgplfSZp36N87eHfygFaR04RaHNNUVpg
+1vnADd0QvDwYiEbRyjLN+EFdxDcbcsqljUUfPMx1zjlA1Ff2dbCkOVYYfm5xDzoE
+weFx6inCtZ0pHqWdF5R77n4Rg3dmR/98dXM3nXhFevoAI7FqyauYFL0QFLXvIufg
+3zywJrolNLZrrbpkSJ9kWzIZn0OK4Q+5dSnpBEimBZSrJKbZhgS/uzCL5flezKTF
+LzHY0CRXC7nXO5dY2baBbIqRvYlCgbmaN4J505Fn6YSmwm3deCan2xyGHg==
+-----END CERTIFICATE-----

iris-web/certificates/web_certificates/iris_dev_key.pem (+28 -0)
@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCVlaaHKmj9kkyO
+aodIG+JH2nNglhqJKVp94RRr/0uUF7gRyi/vhctXL7dO4bfDVgXMPUznuP4Qmtxl
+yXkTru4hWt4C0k0t2CoPIKTqcKTcF0aASeFyZAJ68tdoKwEf5n2UFHKu65NACKGr
+dAnO2ixO4j6yj0BpmBKY6bXu8vjlLdCIUg9QH8+C/rQF9iR3RJXVXhjvT0gq+UJd
+KLGvIDSGSpPXYPp80UM+rBufWjsh93upWpXEPRBLxZ/no9QPwn7jUDtYvNLL980o
+BQ5Kg7KFa92Huzm5yKokRcMyHUG+ukFIKaSZbxqmtZU6vrJq6CYGDnyDihozTFY5
+wUV4hx2tAgMBAAECggEAV+BbvYpvtZAOA5iXswgWjknKgFKOckfmDo99NNj9KJoq
+m+Dg+mDqjWTN1ryJ/Wp663qTxIoMT+r6UZ3j0GlzIgtE4/lyN92HD+4IlGXqpBXU
+aCd/F3mjb2FcpKim93usCKNeoF5q2jJ378aywF+xqgIF/VZk6+PYARdDt4XsLI4w
+vfJSbjRuynnSHl3kD2atcivAxYDu6AggQPsSPmF66z754eKA3BJIAWRUCdx/llTk
+ARizLI4DFHKSYZq9pcKNtCrPIOrUkflG9QPZKn9dI0W+AaSroyqOQQvMY1NT3uEo
+TsXYoyxHGH7+tkoaSHX6JteDe4YbZFbQ7z1s6ZHMIQKBgQDHYoS5wpCxOpCUCOxU
+4s05n88tOJE7PnUQ1j1AosFes6qY1IFnsUss9ikF0aIL+gYvuEIU1z/ITDmBgWao
+bJq6ySCHhyqOMZxMK+nuwIJQEmmIImfxEs8Hf891Cej5NO964VWIsBtNln10yLrj
+Rc9J8J643O6YLyGuXDyXdxNcqQKBgQDADxnzPcou1wmlMK6zc7UeZ/bhi6hdYsWU
+X3znd5jQZ8576A38g1v742A2yfWIftnNiMrCdwnS8x9Rw1ps160E5MvzEUOckqig
+zJXn3PvO7tnReu4/Z4HoTUcbRtbBNMaIFgbW62A4S9CyiFZf9dONHoqhpYvbNJPx
+kjGp6Ol3ZQKBgEzz2yIO0+VzIwXfg8cnWenZog5j/LmO24PKDA38QwGX+knOCrvI
+k6kgwKh8Rjy1HNoiFW8RvI5DzRYMqWBrujRJGAL2yhfjUd2cPUdmiWT6Fjzyeody
+qPDOBXW4g3BbW+pjOa3tujvxzy3ZozfAY8a31aqnqnaWCjvPYZtb298xAoGAYbIM
+2D+xLhxyqpXV+DC+jAYEfnylG0PYD353MeMTV8fGMB89phpH2xyxX41iGZm1Pyj7
+Qup8k9LaNqQxxjX7rAaafD1m8ClmH82R34z4hi3XnQh0UspbOYi9x/FD4qnu52CV
+ABRhMKHYOkjB7zRD9X/4svtb5hibvQFJxA1XXUUCgYBaeZ7tZb8lWkd8v9uZ99qX
+wpm2bO+RQpOeNkP31VpO3jj9/0S+SJSRc9a3JnNRLKNtKhNZpTlaP0coBqZqwb+u
+gWAvdeZinxFwRj6VXvS8+2SP7ImRL1HgOwDQxDWXQxf3e3Zg7QoZLTea9Lq9Zf2g
+JLbJbOUpEOe5W4M8xLItlg==
+-----END PRIVATE KEY-----

wazuh-docker/.github/.goss.yaml (+0 -103)
@@ -1,103 +0,0 @@
-file:
-  /etc/filebeat/filebeat.yml:
-    exists: true
-    mode: "0644"
-    owner: root
-    group: root
-    filetype: file
-    contains: []
-  /var/ossec/bin/wazuh-control:
-    exists: true
-    mode: "0750"
-    owner: root
-    group: root
-    filetype: file
-    contains: []
-  /var/ossec/etc/lists/audit-keys:
-    exists: true
-    mode: "0660"
-    owner: wazuh
-    group: wazuh
-    filetype: file
-    contains: []
-  /var/ossec/etc/ossec.conf:
-    exists: true
-    mode: "0660"
-    owner: root
-    group: wazuh
-    filetype: file
-    contains: []
-  /var/ossec/etc/rules/local_rules.xml:
-    exists: true
-    mode: "0660"
-    owner: wazuh
-    group: wazuh
-    filetype: file
-    contains: []
-  /var/ossec/etc/sslmanager.cert:
-    exists: true
-    mode: "0640"
-    owner: root
-    group: root
-    filetype: file
-    contains: []
-  /var/ossec/etc/sslmanager.key:
-    exists: true
-    mode: "0640"
-    owner: root
-    group: root
-    filetype: file
-    contains: []
-package:
-  filebeat:
-    installed: true
-    versions:
-    - 7.10.2
-  wazuh-manager:
-    installed: true
-    versions:
-    - 4.14.3
-port:
-  tcp:1514:
-    listening: true
-    ip:
-    - 0.0.0.0
-  tcp:1515:
-    listening: true
-    ip:
-    - 0.0.0.0
-  tcp:55000:
-    listening: true
-    ip:
-    - 0.0.0.0
-process:
-  filebeat:
-    running: true
-  wazuh-analysisd:
-    running: true
-  wazuh-authd:
-    running: true
-  wazuh-execd:
-    running: true
-  wazuh-monitord:
-    running: true
-  wazuh-remoted:
-    running: true
-  wazuh-syscheckd:
-    running: true
-  s6-supervise:
-    running: true
-  wazuh-db:
-    running: true
-  wazuh-modulesd:
-    running: true
-user:
-  wazuh:
-    exists: true
-    groups:
-    - wazuh
-    home: /var/ossec
-    shell: /sbin/nologin
-group:
-  wazuh:
-    exists: true

wazuh-docker/.github/free-disk-space/action.yml (+0 −245)

@@ -1,245 +0,0 @@
-name: "Free Disk Space (Ubuntu)"
-description: "A configurable GitHub Action to free up disk space on an Ubuntu GitHub Actions runner."
-
-# Thanks @jlumbroso for the action code https://github.com/jlumbroso/free-disk-space/
-# See: https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#branding
-
-inputs:
-  android:
-    description: "Remove Android runtime"
-    required: false
-    default: "true"
-  dotnet:
-    description: "Remove .NET runtime"
-    required: false
-    default: "true"
-  haskell:
-    description: "Remove Haskell runtime"
-    required: false
-    default: "true"
-
-  # option inspired by:
-  # https://github.com/apache/flink/blob/master/tools/azure-pipelines/free_disk_space.sh
-  large-packages:
-    description: "Remove large packages"
-    required: false
-    default: "true"
-
-  docker-images:
-    description: "Remove Docker images"
-    required: false
-    default: "true"
-
-  # option inspired by:
-  # https://github.com/actions/virtual-environments/issues/2875#issuecomment-1163392159
-  tool-cache:
-    description: "Remove image tool cache"
-    required: false
-    default: "false"
-
-  swap-storage:
-    description: "Remove swap storage"
-    required: false
-    default: "true"
-
-runs:
-  using: "composite"
-  steps:
-    - shell: bash
-      run: |
-
-        # ======
-        # MACROS
-        # ======
-
-        # macro to print a line of equals
-        # (silly but works)
-        printSeparationLine() {
-          str=${1:=}
-          num=${2:-80}
-          counter=1
-          output=""
-          while [ $counter -le $num ]
-          do
-             output="${output}${str}"
-             counter=$((counter+1))
-          done
-          echo "${output}"
-        }
-
-        # macro to compute available space
-        # REF: https://unix.stackexchange.com/a/42049/60849
-        # REF: https://stackoverflow.com/a/450821/408734
-        getAvailableSpace() { echo $(df -a $1 | awk 'NR > 1 {avail+=$4} END {print avail}'); }
-
-        # macro to make Kb human readable (assume the input is Kb)
-        # REF: https://unix.stackexchange.com/a/44087/60849
-        formatByteCount() { echo $(numfmt --to=iec-i --suffix=B --padding=7 $1'000'); }
-
-        # macro to output saved space
-        printSavedSpace() {
-          saved=${1}
-          title=${2:-}
-
-          echo ""
-          printSeparationLine '*' 80
-          if [ ! -z "${title}" ]; then
-            echo "=> ${title}: Saved $(formatByteCount $saved)"
-          else
-            echo "=> Saved $(formatByteCount $saved)"
-          fi
-          printSeparationLine '*' 80
-          echo ""
-        }
-
-        # macro to print output of dh with caption
-        printDH() {
-          caption=${1:-}
-
-          printSeparationLine '=' 80
-          echo "${caption}"
-          echo ""
-          echo "$ dh -h /"
-          echo ""
-          df -h /
-          echo "$ dh -a /"
-          echo ""
-          df -a /
-          echo "$ dh -a"
-          echo ""
-          df -a
-          printSeparationLine '=' 80
-        }
-
-
-
-        # ======
-        # SCRIPT
-        # ======
-
-        # Display initial disk space stats
-
-        AVAILABLE_INITIAL=$(getAvailableSpace)
-        AVAILABLE_ROOT_INITIAL=$(getAvailableSpace '/')
-
-        printDH "BEFORE CLEAN-UP:"
-        echo ""
-
-
-        # Option: Remove Android library
-
-        if [[ ${{ inputs.android }} == 'true' ]]; then
-          BEFORE=$(getAvailableSpace)
-
-          sudo rm -rf /usr/local/lib/android || true
-
-          AFTER=$(getAvailableSpace)
-          SAVED=$((AFTER-BEFORE))
-          printSavedSpace $SAVED "Android library"
-        fi
-
-        # Option: Remove .NET runtime
-
-        if [[ ${{ inputs.dotnet }} == 'true' ]]; then
-          BEFORE=$(getAvailableSpace)
-
-          # https://github.community/t/bigger-github-hosted-runners-disk-space/17267/11
-          sudo rm -rf /usr/share/dotnet || true
-
-          AFTER=$(getAvailableSpace)
-          SAVED=$((AFTER-BEFORE))
-          printSavedSpace $SAVED ".NET runtime"
-        fi
-
-        # Option: Remove Haskell runtime
-
-        if [[ ${{ inputs.haskell }} == 'true' ]]; then
-          BEFORE=$(getAvailableSpace)
-
-          sudo rm -rf /opt/ghc || true
-          sudo rm -rf /usr/local/.ghcup || true
-
-          AFTER=$(getAvailableSpace)
-          SAVED=$((AFTER-BEFORE))
-          printSavedSpace $SAVED "Haskell runtime"
-        fi
-
-        # Option: Remove large packages
-        # REF: https://github.com/apache/flink/blob/master/tools/azure-pipelines/free_disk_space.sh
-
-        if [[ ${{ inputs.large-packages }} == 'true' ]]; then
-          BEFORE=$(getAvailableSpace)
-
-          sudo apt-get remove -y '^aspnetcore-.*' || echo "::warning::The command [sudo apt-get remove -y '^aspnetcore-.*'] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y '^dotnet-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^dotnet-.*' --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y '^llvm-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^llvm-.*' --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y 'php.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y 'php.*' --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y '^mongodb-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mongodb-.*' --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y '^mysql-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mysql-.*' --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing || echo "::warning::The command [sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y google-cloud-sdk --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-sdk --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get remove -y google-cloud-cli --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-cli --fix-missing] failed to complete successfully. Proceeding..."
-          sudo apt-get autoremove -y || echo "::warning::The command [sudo apt-get autoremove -y] failed to complete successfully. Proceeding..."
-          sudo apt-get clean || echo "::warning::The command [sudo apt-get clean] failed to complete successfully. Proceeding..."
-
-          AFTER=$(getAvailableSpace)
-          SAVED=$((AFTER-BEFORE))
-          printSavedSpace $SAVED "Large misc. packages"
-        fi
-
-        # Option: Remove Docker images
-
-        if [[ ${{ inputs.docker-images }} == 'true' ]]; then
-          BEFORE=$(getAvailableSpace)
-
-          sudo docker image prune --all --force || true
-
-          AFTER=$(getAvailableSpace)
-          SAVED=$((AFTER-BEFORE))
-          printSavedSpace $SAVED "Docker images"
-        fi
-
-        # Option: Remove tool cache
-        # REF: https://github.com/actions/virtual-environments/issues/2875#issuecomment-1163392159
-
-        if [[ ${{ inputs.tool-cache }} == 'true' ]]; then
-          BEFORE=$(getAvailableSpace)
-
-          sudo rm -rf "$AGENT_TOOLSDIRECTORY" || true
-
-          AFTER=$(getAvailableSpace)
-          SAVED=$((AFTER-BEFORE))
-          printSavedSpace $SAVED "Tool cache"
-        fi
-
-        # Option: Remove Swap storage
-
-        if [[ ${{ inputs.swap-storage }} == 'true' ]]; then
-          BEFORE=$(getAvailableSpace)
-
-          sudo swapoff -a || true
-          sudo rm -f /mnt/swapfile || true
-          free -h
-
-          AFTER=$(getAvailableSpace)
-          SAVED=$((AFTER-BEFORE))
-          printSavedSpace $SAVED "Swap storage"
-        fi
-
-
-
-        # Output saved space statistic
-
-        AVAILABLE_END=$(getAvailableSpace)
-        AVAILABLE_ROOT_END=$(getAvailableSpace '/')
-
-        echo ""
-        printDH "AFTER CLEAN-UP:"
-
-        echo ""
-        echo ""
-
-        echo "/dev/root:"
-        printSavedSpace $((AVAILABLE_ROOT_END - AVAILABLE_ROOT_INITIAL))
-        echo "overall:"
-        printSavedSpace $((AVAILABLE_END - AVAILABLE_INITIAL))

wazuh-docker/.github/multi-node-filebeat-check.sh (+0 −39)

@@ -1,39 +0,0 @@
-COMMAND_TO_EXECUTE="filebeat test output"
-
-MASTER_CONTAINERS=$(docker ps --format '{{.Names}}' | grep -E 'master')
-
-if [ -z "$MASTER_CONTAINERS" ]; then
-  echo "No containers were found with 'master' in their name."
-else
-  for MASTER_CONTAINERS in $MASTER_CONTAINERS; do
-    FILEBEAT_OUTPUT=$(docker exec "$MASTER_CONTAINERS" $COMMAND_TO_EXECUTE)
-    FILEBEAT_STATUS=$(echo "${FILEBEAT_OUTPUT}" | grep -c OK)
-    if [[ $FILEBEAT_STATUS -eq 7 ]]; then
-      echo "No errors in filebeat"
-      echo "${FILEBEAT_OUTPUT}"
-    else
-      echo "Errors in filebeat"
-      echo "${FILEBEAT_OUTPUT}"
-      exit 1
-    fi
-  done
-fi
-
-MASTER_CONTAINERS=$(docker ps --format '{{.Names}}' | grep -E 'worker')
-
-if [ -z "$MASTER_CONTAINERS" ]; then
-  echo "No containers were found with 'worker' in their name."
-else
-  for MASTER_CONTAINERS in $MASTER_CONTAINERS; do
-    FILEBEAT_OUTPUT=$(docker exec "$MASTER_CONTAINERS" $COMMAND_TO_EXECUTE)
-    FILEBEAT_STATUS=$(echo "${FILEBEAT_OUTPUT}" | grep -c OK)
-    if [[ $FILEBEAT_STATUS -eq 7 ]]; then
-      echo "No errors in filebeat"
-      echo "${FILEBEAT_OUTPUT}"
-    else
-      echo "Errors in filebeat"
-      echo "${FILEBEAT_OUTPUT}"
-      exit 1
-    fi
-  done
-fi

wazuh-docker/.github/multi-node-log-check.sh (+0 −16)

@@ -1,16 +0,0 @@
-log1=$(docker exec multi-node_wazuh.master_1 sh -c 'cat /var/ossec/logs/ossec.log' | grep -P "ERR|WARN|CRIT")
-if [[ -z "$log1" ]]; then
-  echo "No errors in master ossec.log"
-else
-  echo "Errors in master ossec.log:"
-  echo "${log1}"
-  exit 1
-fi
-log2=$(docker exec multi-node_wazuh.worker_1 sh -c 'cat /var/ossec/logs/ossec.log' | grep -P "ERR|WARN|CRIT")
-if [[ -z "${log2}" ]]; then
-  echo "No errors in worker ossec.log"
-else
-  echo "Errors in worker ossec.log:"
-  echo "${log2}"
-  exit 1
-fi

wazuh-docker/.github/single-node-filebeat-check.sh (+0 −20)

@@ -1,20 +0,0 @@
-COMMAND_TO_EXECUTE="filebeat test output"
-
-MASTER_CONTAINERS=$(docker ps --format '{{.Names}}' | grep -E 'manager')
-
-if [ -z "$MASTER_CONTAINERS" ]; then
-  echo "No containers were found with 'manager' in their name."
-else
-  for MASTER_CONTAINERS in $MASTER_CONTAINERS; do
-    FILEBEAT_OUTPUT=$(docker exec "$MASTER_CONTAINERS" $COMMAND_TO_EXECUTE)
-    FILEBEAT_STATUS=$(echo "${FILEBEAT_OUTPUT}" | grep -c OK)
-    if [[ $FILEBEAT_STATUS -eq 7 ]]; then
-      echo "No errors in filebeat"
-      echo "${FILEBEAT_OUTPUT}"
-    else
-      echo "Errors in filebeat"
-      echo "${FILEBEAT_OUTPUT}"
-      exit 1
-    fi
-  done
-fi

wazuh-docker/.github/single-node-log-check.sh (+0 −8)

@@ -1,8 +0,0 @@
-log=$(docker exec single-node_wazuh.manager_1 sh -c 'cat /var/ossec/logs/ossec.log' | grep -P "ERR|WARN|CRIT")
-if [[ -z "$log" ]]; then
-  echo "No errors in ossec.log"
-else
-  echo "Errors in ossec.log:"
-  echo "${log}"
-  exit 1
-fi

wazuh-docker/.github/workflows/4_bumper_repository.yml (+0 −142)

@@ -1,142 +0,0 @@
-name: Repository bumper
-run-name: Bump ${{ github.ref_name }} (${{ inputs.id }})
-
-on:
-  workflow_dispatch:
-    inputs:
-      version:
-        description: 'Target version (e.g. 1.2.3)'
-        default: ''
-        required: false
-        type: string
-      stage:
-        description: 'Version stage (e.g. alpha0)'
-        default: ''
-        required: false
-        type: string
-      tag:
-        description: 'Change branches references to tag-like references (e.g. v4.12.0-alpha7)'
-        default: false
-        required: false
-        type: boolean
-      issue-link:
-        description: 'Issue link in format https://github.com/wazuh/<REPO>/issues/<ISSUE-NUMBER>'
-        required: true
-        type: string
-      id:
-        description: 'Optional identifier for the run'
-        required: false
-        type: string
-
-jobs:
-  bump:
-    name: Repository bumper
-    runs-on: ubuntu-22.04
-    permissions:
-      contents: write
-      pull-requests: write
-
-    env:
-      CI_COMMIT_AUTHOR: wazuhci
-      CI_COMMIT_EMAIL: 22834044+wazuhci@users.noreply.github.com
-      CI_GPG_PRIVATE_KEY: ${{ secrets.CI_WAZUHCI_GPG_PRIVATE }}
-      GH_TOKEN: ${{ secrets.CI_WAZUHCI_BUMPER_TOKEN }}
-      BUMP_SCRIPT_PATH: tools/repository_bumper.sh
-      BUMP_LOG_PATH: tools
-
-    steps:
-      - name: Dump event payload
-        run: |
-          cat $GITHUB_EVENT_PATH | jq '.inputs'
-
-      - name: Set up GPG key
-        id: signing_setup
-        run: |
-          echo "${{ env.CI_GPG_PRIVATE_KEY }}" | gpg --batch --import
-          KEY_ID=$(gpg --list-secret-keys --with-colons | awk -F: '/^sec/ {print $5; exit}')
-          echo "gpg_key_id=$KEY_ID" >> $GITHUB_OUTPUT
-
-      - name: Set up git
-        run: |
-          git config --global user.name "${{ env.CI_COMMIT_AUTHOR }}"
-          git config --global user.email "${{ env.CI_COMMIT_EMAIL }}"
-          git config --global commit.gpgsign true
-          git config --global user.signingkey "${{ steps.signing_setup.outputs.gpg_key_id }}"
-          echo "use-agent" >> ~/.gnupg/gpg.conf
-          echo "pinentry-mode loopback" >> ~/.gnupg/gpg.conf
-          echo "allow-loopback-pinentry" >> ~/.gnupg/gpg-agent.conf
-          echo RELOADAGENT | gpg-connect-agent
-          export DEBIAN_FRONTEND=noninteractive
-          export GPG_TTY=$(tty)
-
-      - name: Checkout repository
-        uses: actions/checkout@v4
-        with:
-          # Using workflow-specific GITHUB_TOKEN because currently CI_WAZUHCI_BUMPER_TOKEN
-          # doesn't have all the necessary permissions
-          token: ${{ env.GH_TOKEN }}
-
-      - name: Determine branch name
-        id: vars
-        env:
-          VERSION: ${{ inputs.version }}
-          STAGE: ${{ inputs.stage }}
-          TAG: ${{ inputs.tag }}
-        run: |
-          script_params=""
-          version=${{ env.VERSION }}
-          stage=${{ env.STAGE }}
-          tag=${{ env.TAG }}
-
-          # Both version and stage provided
-          if [[ -n "$version" && -n "$stage" && "$tag" != "true" ]]; then
-            script_params="--version ${version} --stage ${stage}"
-          elif [[ -n "$version" && -n "$stage" && "$tag" == "true" ]]; then
-            script_params="--version ${version} --stage ${stage} --tag ${tag}"
-          fi
-
-          issue_number=$(echo "${{ inputs.issue-link }}" | awk -F'/' '{print $NF}')
-          BRANCH_NAME="enhancement/wqa${issue_number}-bump-${{ github.ref_name }}"
-          echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
-          echo "script_params=${script_params}" >> $GITHUB_OUTPUT
-
-      - name: Create and switch to bump branch
-        run: |
-          git checkout -b ${{ steps.vars.outputs.branch_name }}
-
-      - name: Make version bump changes
-        run: |
-          echo "Running bump script"
-          bash ${{ env.BUMP_SCRIPT_PATH }} ${{ steps.vars.outputs.script_params }}
-
-      - name: Commit and push changes
-        run: |
-          git add .
-          git commit -m "feat: bump ${{ github.ref_name }}"
-          git push origin ${{ steps.vars.outputs.branch_name }}
-
-      - name: Create pull request
-        id: create_pr
-        run: |
-          gh auth setup-git
-          PR_URL=$(gh pr create \
-            --title "Bump ${{ github.ref_name }} branch" \
-            --body "Issue: ${{ inputs.issue-link }}" \
-            --base ${{ github.ref_name }} \
-            --head ${{ steps.vars.outputs.branch_name }})
-
-          echo "Pull request created: ${PR_URL}"
-          echo "pull_request_url=${PR_URL}" >> $GITHUB_OUTPUT
-
-      - name: Merge pull request
-        run: |
-          # Any checks for the PR are bypassed since the branch is expected to be functional (i.e. the bump process does not introduce any bugs)
-          gh pr merge "${{ steps.create_pr.outputs.pull_request_url }}" --merge --admin
-
-      - name: Show logs
-        run: |
-          echo "Bump complete."
-          echo "Branch: ${{ steps.vars.outputs.branch_name }}"
-          echo "PR: ${{ steps.create_pr.outputs.pull_request_url }}"
-          echo "Bumper scripts logs:"
-          cat ${BUMP_LOG_PATH}/repository_bumper*log

wazuh-docker/.github/workflows/Procedure_push_docker_images.yml (+0 −243)

@@ -1,243 +0,0 @@
-run-name: Launch Push Docker Images - ${{ inputs.id }}
-name: Push Docker Images
-
-on:
-  workflow_dispatch:
-    inputs:
-      image_tag:
-        description: 'Docker image tag'
-        default: '4.14.3'
-        required: true
-      docker_reference:
-        description: 'wazuh-docker reference'
-        required: true
-      filebeat_module_version:
-        description: 'Filebeat module version'
-        default: '0.5'
-        required: true
-      revision:
-        description: 'Package revision'
-        default: '1'
-        required: true
-      id:
-        description: "ID used to identify the workflow uniquely."
-        type: string
-        required: false
-      dev:
-        description: "Add tag suffix '-dev' to the image tag ?"
-        type: boolean
-        default: true
-        required: false
-  workflow_call:
-    inputs:
-      image_tag:
-        description: 'Docker image tag'
-        default: '4.14.3'
-        required: true
-        type: string
-      docker_reference:
-        description: 'wazuh-docker reference'
-        required: false
-        type: string
-      filebeat_module_version:
-        description: 'Filebeat module version'
-        default: '0.5'
-        required: true
-        type: string
-      revision:
-        description: 'Package revision'
-        default: '1'
-        required: true
-        type: string
-      id:
-        description: "ID used to identify the workflow uniquely."
-        type: string
-        required: false
-      dev:
-        description: "Add tag suffix '-dev' to the image tag ?"
-        type: boolean
-        default: false
-        required: false
-
-jobs:
-  build-and-push:
-    runs-on: ubuntu-22.04
-
-    permissions:
-      id-token: write
-      contents: read
-
-    env:
-      IMAGE_REGISTRY: ${{ inputs.dev && vars.IMAGE_REGISTRY_DEV || vars.IMAGE_REGISTRY_PROD }}
-      IMAGE_TAG: ${{ inputs.image_tag }}
-      FILEBEAT_MODULE_VERSION: ${{ inputs.filebeat_module_version }}
-      REVISION: ${{ inputs.revision }}
-
-    steps:
-    - name: Print inputs
-      run: |
-        echo "---------------------------------------------"
-        echo "Running Procedure_push_docker_images workflow"
-        echo "---------------------------------------------"
-        echo "* BRANCH: ${{ github.ref }}"
-        echo "* COMMIT: ${{ github.sha }}"
-        echo "---------------------------------------------"
-        echo "Inputs provided:"
-        echo "---------------------------------------------"
-        echo "* id: ${{ inputs.id }}"
-        echo "* image_tag: ${{ inputs.image_tag }}"
-        echo "* docker_reference: ${{ inputs.docker_reference }}"
-        echo "* filebeat_module_version: ${{ inputs.filebeat_module_version }}"
-        echo "* revision: ${{ inputs.revision }}"
-        echo "* dev: ${{ inputs.dev }}"
-        echo "---------------------------------------------"
-
-    - name: Checkout repository
-      uses: actions/checkout@v4
-      with:
-        ref: ${{ inputs.docker_reference }}
-
-    - name: free disk space
-      uses: ./.github/free-disk-space
-
-    - name: Set up QEMU
-      uses: docker/setup-qemu-action@v3
-
-    - name: Set up Docker Buildx
-      uses: docker/setup-buildx-action@v3
-
-    - name: Configure aws credentials
-      if: ${{ inputs.dev == true }}
-      uses: aws-actions/configure-aws-credentials@v4
-      with:
-        role-to-assume: ${{ secrets.AWS_IAM_DOCKER_ROLE }}
-        aws-region: "${{ secrets.AWS_REGION }}"
-
-    - name: Log in to Amazon ECR
-      if: ${{ inputs.dev == true }}
-      uses: aws-actions/amazon-ecr-login@v2
-
-    - name: Log in to Docker Hub
-      if: ${{ inputs.dev == false }}
-      uses: docker/login-action@v3
-      with:
-        username: ${{ secrets.DOCKERHUB_USERNAME }}
-        password: ${{ secrets.DOCKERHUB_PASSWORD }}
-
-    - name: Build Wazuh images
-      run: |
-        IMAGE_TAG="${{ inputs.image_tag }}"
-        FILEBEAT_MODULE_VERSION=${{ inputs.filebeat_module_version }}
-        REVISION=${{ inputs.revision }}
-
-        if [[ "$IMAGE_TAG" == *"-"* ]]; then
-          IFS='-' read -r -a tokens <<< "$IMAGE_TAG"
-          if [ -z "${tokens[1]}" ]; then
-            echo "Invalid image tag: $IMAGE_TAG"
-            exit 1
-          fi
-          DEV_STAGE=${tokens[1]}
-          WAZUH_VER=${tokens[0]}
-          ./build-images.sh -v $WAZUH_VER -r $REVISION -d $DEV_STAGE -f $FILEBEAT_MODULE_VERSION -rg $IMAGE_REGISTRY -m
-        else
-          ./build-images.sh -v $IMAGE_TAG -r $REVISION -f $FILEBEAT_MODULE_VERSION -rg $IMAGE_REGISTRY -m
-        fi
-
-        # Save .env file (generated by build-images.sh) contents to $GITHUB_ENV
-        ENV_FILE_PATH="../.env"
-
-        if [ -f $ENV_FILE_PATH ]; then
-          while IFS= read -r line || [ -n "$line" ]; do
-            echo "$line" >> $GITHUB_ENV
-          done < $ENV_FILE_PATH
-        else
-          echo "The environment file $ENV_FILE_PATH does not exist!"
-          exit 1
-        fi
-      working-directory: ./build-docker-images
-
-    - name: Image exists validation
-      if: ${{ inputs.dev == false }}
-      id: validation
-      run: |
-        IMAGE_TAG=${{ inputs.image_tag }}
-        PURPOSE=""
-
-        if [[ "$IMAGE_TAG" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
-          if docker manifest inspect $IMAGE_REGISTRY/wazuh/wazuh-manager:$IMAGE_TAG > /dev/null 2>&1; then
-            PURPOSE="regeneration"
-            echo "Image wazuh/wazuh-manager:$IMAGE_TAG exists. Setting PURPOSE to 'regeneration'"
-          else
-            PURPOSE="new release"
-            echo "Image wazuh/wazuh-manager:$IMAGE_TAG does NOT exist. Setting PURPOSE to 'new release'"
-          fi
-          echo "✅ Release tag: '$IMAGE_TAG'"
-        elif [[ "$IMAGE_TAG" =~ ^[0-9]+\.[0-9]+\.[0-9]+-(alpha|beta|rc)[0-9]+$ ]]; then
-          PURPOSE="new stage"
-          echo "✅ Stage tag: '$IMAGE_TAG'. Setting PURPOSE to 'new stage'"
-        else
-          echo "❌ No release or stage tag ('$IMAGE_TAG'), the GH issue will not be created"
-        fi
-
-        echo "purpose=$PURPOSE" >> $GITHUB_OUTPUT
-
-    - name: GH issue notification
-      if: ${{ inputs.dev == false && steps.validation.outputs.purpose != '' }}
-      run: |
-        IMAGE_TAG=${{ inputs.image_tag }}
-        GH_TITLE=""
-        GH_MESSAGE=""
-        PURPOSE="${{ steps.validation.outputs.purpose }}"
-
-        ## Setting GH issue title
-        GH_TITLE="Artifactory vulnerabilities update \`v$IMAGE_TAG\`"
-
-        ## Setting GH issue body
-        GH_MESSAGE=$(cat <<- EOF | tr -d '\r' | sed 's/^[[:space:]]*//'
-        ### Description
-        - [ ] Update the [Artifactory vulnerabilities](${{ secrets.NOTIFICATION_SHEET_URL }}) sheet with the \`v$IMAGE_TAG\` vulnerabilities.
-
-        **Purpose**: $PURPOSE
-        >[!NOTE]
-        >To update the \`Tentative Release\` column, follow these steps:
-        https://github.com/wazuh/${{ secrets.NOTIFICATION_REPO }}/issues/2049#issuecomment-2671590268
-        EOF
-        )
-
-        # Print the GH Variables content
-        echo "--- Variable Content ---"
-        echo "$GH_TITLE"
-        echo "------------------------"
-
-        echo "--- Variable Content ---"
-        echo "$GH_MESSAGE"
-        echo "------------------------"
-
-        ## GH issue creation
-        ISSUE_URL=$(gh issue create \
-          -R wazuh/${{ secrets.NOTIFICATION_REPO }} \
-          --title "$GH_TITLE" \
-          --body "$GH_MESSAGE" \
-          --label "level/task" \
-          --label "type/maintenance" \
-          --label "request/operational")
-
-        ## Adding the issue to the team project
-        PROJECT_ITEM_ID=$(gh project item-add \
-          ${{ secrets.NOTIFICATION_PROJECT_NUMBER }} \
-          --url $ISSUE_URL \
-          --owner wazuh \
-          --format json \
-          | jq -r '.id')
-
-          ## Setting Objective
-          gh project item-edit --id $PROJECT_ITEM_ID --project-id ${{ secrets.NOTIFICATION_PROJECT_ID }} --field-id ${{ secrets.NOTIFICATION_PROJECT_OBJECTIVE_ID }} --text  "Security scans"
-          ## Setting Priority
-          gh project item-edit --id $PROJECT_ITEM_ID --project-id ${{ secrets.NOTIFICATION_PROJECT_ID }} --field-id ${{ secrets.NOTIFICATION_PROJECT_PRIORITY_ID }} --single-select-option-id ${{ secrets.NOTIFICATION_PROJECT_PRIORITY_OPTION_ID }}
-          ## Setting Size
-          gh project item-edit --id $PROJECT_ITEM_ID --project-id ${{ secrets.NOTIFICATION_PROJECT_ID }} --field-id ${{ secrets.NOTIFICATION_PROJECT_SIZE_ID }} --single-select-option-id ${{ secrets.NOTIFICATION_PROJECT_SIZE_OPTION_ID }}
-          ## Setting Subteam
-          gh project item-edit --id $PROJECT_ITEM_ID --project-id ${{ secrets.NOTIFICATION_PROJECT_ID }} --field-id ${{ secrets.NOTIFICATION_PROJECT_SUBTEAM_ID }} --single-select-option-id ${{ secrets.NOTIFICATION_PROJECT_SUBTEAM_OPTION_ID }}
-
-      env:
-        GH_TOKEN: ${{ secrets.NOTIFICATION_GH_ARTIFACT_TOKEN }}

+ 0 - 369
wazuh-docker/.github/workflows/push.yml

@@ -1,369 +0,0 @@
1
-name: Wazuh Docker pipeline
2
-
3
-on: [pull_request]
4
-
5
-jobs:
6
-  build-docker-images:
7
-    runs-on: ubuntu-22.04
-    steps:
-
-    - name: Check out code
-      uses: actions/checkout@v4
-
-    - name: Build Wazuh images
-      run: ./build-images.sh
-      working-directory: ./build-docker-images
-
-    - name: Create enviroment variables
-      run: cat .env > $GITHUB_ENV
-
-    - name: Create backup Docker images
-      run: |
-        mkdir -p /home/runner/work/wazuh-docker/wazuh-docker/docker-images/
-        docker save wazuh/wazuh-manager:${{env.WAZUH_IMAGE_VERSION}} -o /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-manager.tar
-        docker save wazuh/wazuh-indexer:${{env.WAZUH_IMAGE_VERSION}} -o /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-indexer.tar
-        docker save wazuh/wazuh-dashboard:${{env.WAZUH_IMAGE_VERSION}} -o /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-dashboard.tar
-        docker save wazuh/wazuh-agent:${{env.WAZUH_IMAGE_VERSION}} -o /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-agent.tar
-
-    - name: Temporarily save Wazuh manager Docker image
-      uses: actions/upload-artifact@v4
-      with:
-        name: docker-artifact-manager
-        path: /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-manager.tar
-        retention-days: 1
-
-    - name: Temporarily save Wazuh indexer Docker image
-      uses: actions/upload-artifact@v4
-      with:
-        name: docker-artifact-indexer
-        path: /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-indexer.tar
-        retention-days: 1
-
-    - name: Temporarily save Wazuh dashboard Docker image
-      uses: actions/upload-artifact@v4
-      with:
-        name: docker-artifact-dashboard
-        path: /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-dashboard.tar
-        retention-days: 1
-
-    - name: Temporarily save Wazuh agent Docker image
-      uses: actions/upload-artifact@v4
-      with:
-        name: docker-artifact-agent
-        path: /home/runner/work/wazuh-docker/wazuh-docker/docker-images/wazuh-agent.tar
-        retention-days: 1
-
-    - name: Install Goss
-      uses: e1himself/goss-installation-action@v1.0.3
-      with:
-        version: v0.3.16
-
-    - name: Execute Goss tests (wazuh-manager)
-      run: dgoss run wazuh/wazuh-manager:${{env.WAZUH_IMAGE_VERSION}}
-      env:
-        GOSS_SLEEP: 30
-        GOSS_FILE: .github/.goss.yaml
-
-  check-single-node:
-    runs-on: ubuntu-22.04
-    needs: build-docker-images
-    steps:
-
-    - name: Check out code
-      uses: actions/checkout@v4
-
-    - name: Create enviroment variables
-      run: cat .env > $GITHUB_ENV
-
-    - name: Retrieve saved Wazuh indexer Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-indexer
-
-    - name: Retrieve saved Wazuh manager Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-manager
-
-    - name: Retrieve saved Wazuh dashboard Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-dashboard
-
-    - name: Retrieve saved Wazuh agent Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-agent
-
-    - name: Docker load
-      run: |
-        docker load --input ./wazuh-indexer.tar
-        docker load --input ./wazuh-dashboard.tar
-        docker load --input ./wazuh-manager.tar
-        docker load --input ./wazuh-agent.tar
-
-    - name: Create single node certficates
-      run: docker compose -f single-node/generate-indexer-certs.yml run --rm generator
-
-    - name: Start single node stack
-      run: docker compose -f single-node/docker-compose.yml up -d
-
-    - name: Check Wazuh indexer start
-      run: |
-       sleep 60
-       status_green="`curl -XGET "https://0.0.0.0:9200/_cluster/health" -u admin:SecretPassword -k -s | grep green | wc -l`"
-       if [[ $status_green -eq 1 ]]; then
-        curl -XGET "https://0.0.0.0:9200/_cluster/health" -u admin:SecretPassword -k -s
-       else
-        curl -XGET "https://0.0.0.0:9200/_cluster/health" -u admin:SecretPassword -k -s
-        exit 1
-       fi
-       status_index="`curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s | wc -l`"
-       status_index_green="`curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s | grep "green" | wc -l`"
-       if [[ $status_index_green -eq $status_index ]]; then
-        curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s
-       else
-        curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s
-        exit 1
-       fi
-
-
-    - name: Check Wazuh indexer nodes
-      run: |
-       nodes="`curl -XGET "https://0.0.0.0:9200/_cat/nodes" -u admin:SecretPassword -k -s | grep -E "indexer" | wc -l`"
-       if [[ $nodes -eq 1 ]]; then
-        echo "Wazuh indexer nodes: ${nodes}"
-       else
-        echo "Wazuh indexer nodes: ${nodes}"
-        exit 1
-       fi
-
-    - name: Check documents into wazuh-alerts index
-      run: |
-       sleep 120
-       docs="`curl -XGET "https://0.0.0.0:9200/wazuh-alerts*/_count" -u admin:SecretPassword -k -s | jq -r ".count"`"
-       if [[ $docs -gt 0 ]]; then
-        echo "wazuh-alerts index documents: ${docs}"
-       else
-        echo "wazuh-alerts index documents: ${docs}"
-        exit 1
-       fi
-
-    - name: Check Wazuh templates
-      run: |
-       qty_templates="`curl -XGET "https://0.0.0.0:9200/_cat/templates" -u admin:SecretPassword -k -s | grep -P "wazuh|wazuh-agent|wazuh-statistics" | wc -l`"
-       templates="`curl -XGET "https://0.0.0.0:9200/_cat/templates" -u admin:SecretPassword -k -s | grep -P "wazuh|wazuh-agent|wazuh-statistics"`"
-       if [[ $qty_templates -gt 3 ]]; then
-        echo "wazuh templates:"
-        echo "${templates}"
-       else
-        echo "wazuh templates:"
-        echo "${templates}"
-        exit 1
-       fi
-
-    - name: Check Wazuh manager start
-      run: |
-        services="`curl -k -s -X GET "https://0.0.0.0:55000/manager/status?pretty=true" -H  "Authorization: Bearer ${{env.TOKEN}}" | jq -r .data.affected_items | grep running | wc -l`"
-        if [[ $services -gt 9 ]]; then
-          echo "Wazuh Manager Services: ${services}"
-          echo "OK"
-        else
-          echo "Wazuh indexer nodes: ${nodes}"
-          curl -k -X GET "https://0.0.0.0:55000/manager/status?pretty=true" -H  "Authorization: Bearer ${{env.TOKEN}}" | jq -r .data.affected_items
-          exit 1
-        fi
-      env:
-        TOKEN: $(curl -s -u wazuh-wui:MyS3cr37P450r.*- -k -X GET "https://0.0.0.0:55000/security/user/authenticate?raw=true")
-
-    - name: Check filebeat output
-      run: ./.github/single-node-filebeat-check.sh
-
-    - name: Check Wazuh dashboard service URL
-      run: |
-       status=$(curl -XGET --silent  https://0.0.0.0:443/app/status -k -u admin:SecretPassword -I -s | grep -E "^HTTP" | awk  '{print $2}')
-       if [[ $status -eq 200 ]]; then
-        echo "Wazuh dashboard status: ${status}"
-       else
-        echo "Wazuh dashboard status: ${status}"
-        exit 1
-       fi
-
-    - name: Modify Docker endpoint into Wazuh agent docker-compose.yml file
-      run: sed -i "s/<WAZUH_MANAGER_IP>/$(ip addr show docker0 | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1)/g" wazuh-agent/docker-compose.yml
-
-    - name: Start Wazuh agent
-      run: docker compose -f wazuh-agent/docker-compose.yml up -d
-
-    - name: Check Wazuh agent enrollment
-      run: |
-        sleep 20
-        curl -k -s -X GET "https://localhost:55000/agents?pretty=true" -H  "Authorization: Bearer ${{env.TOKEN}}"
-      env:
-        TOKEN: $(curl -s -u wazuh-wui:MyS3cr37P450r.*- -k -X GET "https://0.0.0.0:55000/security/user/authenticate?raw=true")
-
-    - name: Check errors in ossec.log for Wazuh manager
-      run: ./.github/single-node-log-check.sh
-
-  check-multi-node:
-    runs-on: ubuntu-22.04
-    needs: build-docker-images
-    steps:
-
-    - name: Check out code
-      uses: actions/checkout@v4
-
-    - name: Create enviroment variables
-      run: cat .env > $GITHUB_ENV
-
-    - name: free disk space
-      uses: ./.github/free-disk-space
-
-    - name: Retrieve saved Wazuh dashboard Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-dashboard
-
-    - name: Retrieve saved Wazuh manager Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-manager
-
-    - name: Retrieve saved Wazuh indexer Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-indexer
-
-    - name: Retrieve saved Wazuh agent Docker image
-      uses: actions/download-artifact@v4
-      with:
-        name: docker-artifact-agent
-
-    - name: Docker load
-      run: |
-        docker load --input ./wazuh-manager.tar
-        docker load --input ./wazuh-indexer.tar
-        docker load --input ./wazuh-dashboard.tar
-        docker load --input ./wazuh-agent.tar
-        rm -rf wazuh-manager.tar wazuh-indexer.tar wazuh-dashboard.tar wazuh-agent.tar
-
-    - name: Create multi node certficates
-      run: docker compose -f multi-node/generate-indexer-certs.yml run --rm generator
-
-    - name: Start multi node stack
-      run: docker compose -f multi-node/docker-compose.yml up -d
-
-    - name: Check Wazuh indexer start
-      run: |
-       until [[ `curl -XGET "https://0.0.0.0:9200/_cluster/health" -u admin:SecretPassword -k -s | grep green | wc -l`  -eq 1 ]]
-       do
-         echo 'Waiting for Wazuh indexer start'
-         free -m
-         df -h
-         sleep 120
-       done
-       status_green="`curl -XGET "https://0.0.0.0:9200/_cluster/health" -u admin:SecretPassword -k -s | grep green | wc -l`"
-       if [[ $status_green -eq 1 ]]; then
-        curl -XGET "https://0.0.0.0:9200/_cluster/health" -u admin:SecretPassword -k -s
-       else
-        curl -XGET "https://0.0.0.0:9200/_cluster/health" -u admin:SecretPassword -k -s
-        exit 1
-       fi
-       status_index="`curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s | wc -l`"
-       status_index_green="`curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s | grep -E "green" | wc -l`"
-       if [[ $status_index_green -eq $status_index ]]; then
-        curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s
-       else
-        curl -XGET "https://0.0.0.0:9200/_cat/indices" -u admin:SecretPassword -k -s
-        exit 1
-       fi
-
-    - name: Check Wazuh indexer nodes
-      run: |
-       nodes="`curl -XGET "https://0.0.0.0:9200/_cat/nodes" -u admin:SecretPassword -k -s | grep -E "indexer" | wc -l`"
-       if [[ $nodes -eq 3 ]]; then
-        echo "Wazuh indexer nodes: ${nodes}"
-       else
-        echo "Wazuh indexer nodes: ${nodes}"
-        exit 1
-       fi
-
-    - name: Check documents into wazuh-alerts index
-      run: |
-       until [[ $(curl -XGET "https://0.0.0.0:9200/wazuh-alerts*/_count" -u admin:SecretPassword -k -s | jq -r ".count")  -gt 0 ]]
-       do
-         echo 'Waiting for Wazuh indexer events'
-         free -m
-         df -h
-         sleep 10
-       done
-       docs="`curl -XGET "https://0.0.0.0:9200/wazuh-alerts*/_count" -u admin:SecretPassword -k -s | jq -r ".count"`"
-       if [[ $docs -gt 0 ]]; then
-        echo "wazuh-alerts index documents: ${docs}"
-       else
-        echo "wazuh-alerts index documents: ${docs}"
-        exit 1
-       fi
-
-    - name: Check Wazuh templates
-      run: |
-       qty_templates="`curl -XGET "https://0.0.0.0:9200/_cat/templates" -u admin:SecretPassword -k -s | grep "wazuh" | wc -l`"
-       templates="`curl -XGET "https://0.0.0.0:9200/_cat/templates" -u admin:SecretPassword -k -s | grep "wazuh"`"
-       if [[ $qty_templates -gt 3 ]]; then
-        echo "wazuh templates:"
-        echo "${templates}"
-       else
-        echo "wazuh templates:"
-        echo "${templates}"
-        exit 1
-       fi
-
-    - name: Check Wazuh manager start
-      run: |
-        services="`curl -k -s -X GET "https://0.0.0.0:55000/manager/status?pretty=true" -H  "Authorization: Bearer ${{env.TOKEN}}" | jq -r .data.affected_items | grep running | wc -l`"
-        if [[ $services -gt 10 ]]; then
-          echo "Wazuh Manager Services: ${services}"
-          echo "OK"
-        else
-          echo "Wazuh indexer nodes: ${nodes}"
-          curl -k -s -X GET "https://0.0.0.0:55000/manager/status?pretty=true" -H  "Authorization: Bearer ${{env.TOKEN}}" | jq -r .data.affected_items
-          exit 1
-        fi
-        nodes=$(curl -k -s -X GET "https://0.0.0.0:55000/cluster/nodes" -H "Authorization: Bearer ${{env.TOKEN}}" | jq -r ".data.affected_items[].name" | wc -l)
-        if [[ $nodes -eq 2 ]]; then
-         echo "Wazuh manager nodes: ${nodes}"
-        else
-         echo "Wazuh manager nodes: ${nodes}"
-         exit 1
-        fi
-      env:
-        TOKEN: $(curl -s -u wazuh-wui:MyS3cr37P450r.*- -k -X GET "https://0.0.0.0:55000/security/user/authenticate?raw=true")
-
-    - name: Check filebeat output
-      run: ./.github/multi-node-filebeat-check.sh
-
-    - name: Check Wazuh dashboard service URL
-      run: |
-       status=$(curl -XGET --silent  https://0.0.0.0:443/app/status -k -u admin:SecretPassword -I | grep -E "^HTTP" | awk  '{print $2}')
-       if [[ $status -eq 200 ]]; then
-        echo "Wazuh dashboard status: ${status}"
-       else
-        echo "Wazuh dashboard status: ${status}"
-        exit 1
-       fi
-
-    - name: Modify Docker endpoint into Wazuh agent docker-compose.yml file
-      run: sed -i "s/<WAZUH_MANAGER_IP>/$(ip addr show docker0 | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1)/g" wazuh-agent/docker-compose.yml
-
-    - name: Start Wazuh agent
-      run: docker compose -f wazuh-agent/docker-compose.yml up -d
-
-    - name: Check Wazuh agent enrollment
-      run: |
-        sleep 20
-        curl -k -s -X GET "https://localhost:55000/agents?pretty=true" -H  "Authorization: Bearer ${{env.TOKEN}}"
-      env:
-        TOKEN: $(curl -s -u wazuh-wui:MyS3cr37P450r.*- -k -X GET "https://0.0.0.0:55000/security/user/authenticate?raw=true")
-
-    - name: Check errors in ossec.log for Wazuh manager
-      run: ./.github/multi-node-log-check.sh
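The indexer checks in the workflow deleted above all follow one pattern: curl `/_cluster/health` and grep the body for a `green` status, retrying until it appears. The core of that check can be sketched as a small standalone helper; this is a hedged illustration, not part of the repository, and it reads the health JSON from stdin instead of curling the live endpoint (the real steps use `curl -k -s -u admin:SecretPassword https://0.0.0.0:9200/_cluster/health`):

```shell
#!/bin/sh
# Sketch of the "wait until the cluster reports green" logic from the deleted
# workflow. The JSON parsing is factored into a function that reads stdin,
# so it can be exercised without a running Wazuh indexer.

cluster_is_green() {
  # Exit 0 only when the /_cluster/health body reports "status":"green".
  grep -q '"status" *: *"green"'
}

# Example against a canned health body; a real caller would pipe in the
# output of curl against the indexer's 9200 port instead.
if echo '{"cluster_name":"wazuh","status":"green","number_of_nodes":1}' | cluster_is_green; then
  echo "indexer is green"
fi
```

The same function can back a retry loop (`until ... | cluster_is_green; do sleep 120; done`), mirroring the `until [[ ... ]]` construct the multi-node job uses.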

+ 0 - 76
wazuh-docker/.github/workflows/trivy-dashboard.yml

@@ -1,76 +0,0 @@
-# This workflow uses actions that are not certified by GitHub.
-# They are provided by a third-party and are governed by
-# separate terms of service, privacy policy, and support
-# documentation.
-
-name: Trivy scan Wazuh dashboard
-
-on:
-  release:
-    types:
-    - published
-  pull_request:
-    branches:
-      - main
-  schedule:
-    - cron: '34 2 * * 1'
-  workflow_dispatch:
-
-permissions:
-  contents: read
-
-jobs:
-  build:
-    permissions:
-      contents: read # for actions/checkout to fetch code
-      security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
-
-    name: Build images and upload Trivy results
-    runs-on: "ubuntu-22.04"
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v3
-
-      - name: Installing dependencies
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y jq
-
-      - name: Checkout latest tag
-        run: |
-          latest=$(curl -s "https://api.github.com/repos/wazuh/wazuh-docker/releases/latest" | jq -r '.tag_name')
-          git fetch origin
-          git checkout $latest
-
-      - name: Build Wazuh images
-        run: build-docker-images/build-images.sh
-
-      - name: Create enviroment variables
-        run: |
-          cat .env > $GITHUB_ENV
-          echo "GITHUB_REF_NAME="${GITHUB_REF_NAME%/*} >> $GITHUB_ENV
-
-      - name: Run Trivy vulnerability scanner for Wazuh dashboard
-        uses: aquasecurity/trivy-action@2a2157eb22c08c9a1fac99263430307b8d1bc7a2
-        with:
-          image-ref: 'wazuh/wazuh-dashboard:${{env.WAZUH_IMAGE_VERSION}}'
-          format: 'template'
-          template: '@/contrib/sarif.tpl'
-          output: 'trivy-results-dashboard.sarif'
-          severity: 'LOW,MEDIUM,CRITICAL,HIGH'
-
-      - name: Upload Trivy scan results to GitHub Security tab
-        uses: github/codeql-action/upload-sarif@v2
-        with:
-          sarif_file: 'trivy-results-dashboard.sarif'
-
-      - name: Slack notification
-        uses: rtCamp/action-slack-notify@v2
-        env:
-          SLACK_CHANNEL: cicd-monitoring
-          SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
-          #SLACK_ICON: https://github.com/rtCamp.png?size=48
-          SLACK_MESSAGE: "Check the results: https://github.com/wazuh/wazuh-docker/security/code-scanning?query=is%3Aopen+branch%3A${{ env.GITHUB_REF_NAME }}"
-          SLACK_TITLE: Wazuh docker Trivy vulnerability scan finished.
-          SLACK_USERNAME: github_actions
-          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

+ 0 - 76
wazuh-docker/.github/workflows/trivy-indexer.yml

@@ -1,76 +0,0 @@
-# This workflow uses actions that are not certified by GitHub.
-# They are provided by a third-party and are governed by
-# separate terms of service, privacy policy, and support
-# documentation.
-
-name: Trivy scan Wazuh indexer
-
-on:
-  release:
-    types:
-    - published
-  pull_request:
-    branches:
-      - main
-  schedule:
-    - cron: '34 2 * * 1'
-  workflow_dispatch:
-
-permissions:
-  contents: read
-
-jobs:
-  build:
-    permissions:
-      contents: read # for actions/checkout to fetch code
-      security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
-
-    name: Build images and upload Trivy results
-    runs-on: "ubuntu-22.04"
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v3
-
-      - name: Installing dependencies
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y jq
-
-      - name: Checkout latest tag
-        run: |
-          latest=$(curl -s "https://api.github.com/repos/wazuh/wazuh-docker/releases/latest" | jq -r '.tag_name')
-          git fetch origin
-          git checkout $latest
-
-      - name: Build Wazuh images
-        run: build-docker-images/build-images.sh
-
-      - name: Create enviroment variables
-        run: |
-          cat .env > $GITHUB_ENV
-          echo "GITHUB_REF_NAME="${GITHUB_REF_NAME%/*} >> $GITHUB_ENV
-
-      - name: Run Trivy vulnerability scanner for Wazuh indexer
-        uses: aquasecurity/trivy-action@2a2157eb22c08c9a1fac99263430307b8d1bc7a2
-        with:
-          image-ref: 'wazuh/wazuh-indexer:${{env.WAZUH_IMAGE_VERSION}}'
-          format: 'template'
-          template: '@/contrib/sarif.tpl'
-          output: 'trivy-results-indexer.sarif'
-          severity: 'LOW,MEDIUM,CRITICAL,HIGH'
-
-      - name: Upload Trivy scan results to GitHub Security tab
-        uses: github/codeql-action/upload-sarif@v2
-        with:
-          sarif_file: 'trivy-results-indexer.sarif'
-
-      - name: Slack notification
-        uses: rtCamp/action-slack-notify@v2
-        env:
-          SLACK_CHANNEL: cicd-monitoring
-          SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
-          #SLACK_ICON: https://github.com/rtCamp.png?size=48
-          SLACK_MESSAGE: "Check the results: https://github.com/wazuh/wazuh-docker/security/code-scanning?query=is%3Aopen+branch%3A${{ env.GITHUB_REF_NAME }}"
-          SLACK_TITLE: Wazuh docker Trivy vulnerability scan finished.
-          SLACK_USERNAME: github_actions
-          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

+ 0 - 76
wazuh-docker/.github/workflows/trivy-manager.yml

@@ -1,76 +0,0 @@
-# This workflow uses actions that are not certified by GitHub.
-# They are provided by a third-party and are governed by
-# separate terms of service, privacy policy, and support
-# documentation.
-
-name: Trivy scan Wazuh manager
-
-on:
-  release:
-    types:
-    - published
-  pull_request:
-    branches:
-      - main
-  schedule:
-    - cron: '34 2 * * 1'
-  workflow_dispatch:
-
-permissions:
-  contents: read
-
-jobs:
-  build:
-    permissions:
-      contents: read # for actions/checkout to fetch code
-      security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
-
-    name: Build images and upload Trivy results
-    runs-on: "ubuntu-22.04"
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v3
-
-      - name: Installing dependencies
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y jq
-
-      - name: Checkout latest tag
-        run: |
-          latest=$(curl -s "https://api.github.com/repos/wazuh/wazuh-docker/releases/latest" | jq -r '.tag_name')
-          git fetch origin
-          git checkout $latest
-
-      - name: Build Wazuh images
-        run: build-docker-images/build-images.sh
-
-      - name: Create enviroment variables
-        run: |
-          cat .env > $GITHUB_ENV
-          echo "GITHUB_REF_NAME="${GITHUB_REF_NAME%/*} >> $GITHUB_ENV
-
-      - name: Run Trivy vulnerability scanner for Wazuh manager
-        uses: aquasecurity/trivy-action@2a2157eb22c08c9a1fac99263430307b8d1bc7a2
-        with:
-          image-ref: 'wazuh/wazuh-manager:${{env.WAZUH_IMAGE_VERSION}}'
-          format: 'template'
-          template: '@/contrib/sarif.tpl'
-          output: 'trivy-results-manager.sarif'
-          severity: 'LOW,MEDIUM,CRITICAL,HIGH'
-
-      - name: Upload Trivy scan results to GitHub Security tab
-        uses: github/codeql-action/upload-sarif@v2
-        with:
-          sarif_file: 'trivy-results-manager.sarif'
-
-      - name: Slack notification
-        uses: rtCamp/action-slack-notify@v2
-        env:
-          SLACK_CHANNEL: cicd-monitoring
-          SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
-          #SLACK_ICON: https://github.com/rtCamp.png?size=48
-          SLACK_MESSAGE: "Check the results: https://github.com/wazuh/wazuh-docker/security/code-scanning?query=is%3Aopen+branch%3A${{ env.GITHUB_REF_NAME }}"
-          SLACK_TITLE: Wazuh docker Trivy vulnerability scan finished.
-          SLACK_USERNAME: github_actions
-          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

+ 0 - 5
wazuh-docker/.gitignore

@@ -1,5 +0,0 @@
-single-node/config/wazuh_indexer_ssl_certs/*.pem
-single-node/config/wazuh_indexer_ssl_certs/*.key
-multi-node/config/wazuh_indexer_ssl_certs/*.pem
-multi-node/config/wazuh_indexer_ssl_certs/*.key
-*.log

+ 28 - 0
wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/admin-key.pem

@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCkux590+ruNmYd
+iWYi8x2XTXBms1gTLthRqmxukIk8eoHjoXyGWarj+TtJ88jpWQ+P9KgeiSk3kVnJ
+jEOt5OO7xLFM/LEWPqPk+YArmb4msUbWsO4Gh9+Xd7qlv0OFEcTDZ4kl/cli2T1f
+podQlPGc3poxxF+B5ucePIto0E4Of1NWP5JtJy+wYm9vOhmxWxCKnHybNblLPdA2
+SQSfjRin890/BJVDkN1ENoZ6D39NKssqVUtsC9g1O7ModLu/O2BzMBzheyQ+OzeT
+HGZ1Tb7RXsGxi/NuMpS9xlFTjb891L2zr1knYxtWM96R+AMw/GHu9NpFHYX3vD24
+NUByeMtTAgMBAAECggEAEOPMnwsc4dq7AplFWx0BMjOy7BzEUYcyj8EBCB8SqxxG
+eA/lJCNKdcBml1EDDwXeJhzoJeVbUAK7qYITqF85CFYE9cdM4uJ+TGpWfWHwkKgJ
+gta2OP4yayXQAdGH+ztUqNJTSg1o2hU7CTUaV6VF1pcuSR8AyeFiXgPIKXF+J1c1
+ldhuvOoTPy2+q/9j7dwBNDzMNqRWugOpI8FOkbEPUXB9Gz3ebaSH7VeU0lw7o/Bi
+frJe2vkXD16UaiztWA10OxvR6hPjYkpcBL74+31kLr9qYFQeP1/RwAY/C6mvjxZz
+TPhiR/R3OAOlFt9jRMvBAIv/JPU9/GlI152nGNHiYQKBgQDRz/ZVhpciXvwUCZvI
+vvi4vXJ8pT/GLOBTWPrrCFWeERV3Cbz+ZftDbvz5g2W6huuUDWSJITwSJbCEuO6k
+M0eZ8JEFDLFVT3vu1s5u7xfZx8c9fLKuuRLFjEMKJ2iO0DyB9i5A7FArNUxkVVgU
+swZu8j9i1SRDMdVrv7OMwnmKoQKBgQDI/pR/WXk+gw+dB9tQYodB8WUv4/UY9KIO
+9GnhUeRxmSSX3DURMrJwQPJgj/yc5fGODLwf3feuHhxhoTWvbOj2vXFkwU8FC0rd
+GsABOL4vGEZhwajOA2TqHZtQs070i+udgL/3SDVaXaNCSgRfLdvSU1zrWj44cVBI
+U6Q8W5lcwKBgHn/PVHvp5OBvXt9NsscWA07gwV9JL77uxhbpdLiDr6RWnTUAcO+
+0sIcGBaRU6aI6xQ0UV/3JjG7Ho+d5I0vkBOvsPNJtRdQ11RCLNiOR8UHCA/1oQQ9
+cu/RJe4SihZ4eKZs2epAPkFRhXDVuxiWHEiIrVivbJ1xrZIwbpuLPRbhAoGAK4At
+04Ih442qC2pv5O3uKC9+nubPXR9VE7eCUunOb2edq+BU++vlAraLvqprGeoKZZwL
+zmnKWAK9HZXkCgaI4zMxemwmH7hLQllFN6bCsZONUocprnFVYYi30xvgi3mSKhc
+48AVDAHIG8i5OYBLWzH/olBdtwmPPrv2bRhTtFECgYByKEHQHCIko7c6iI/ZFT1F
+e3HeOmVznjI8cO7YSWIYGXnydvEOMTHpEf5xlYsm9/4BvCA453TzRZR70CmhiTXs
+DdrhvMEQU0yJRMYfEMzX+aCD0ebb8BqhhBL1VfVw233n09z34MkLErBgEI+zqsTw
+IndfOU0qstU7dEtO/mbfsg==
+-----END PRIVATE KEY-----

+ 20 - 0
wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/admin.pem

@@ -0,0 +1,20 @@
+-----BEGIN CERTIFICATE-----
+MIIDVzCCAj+gAwIBAgIULFbK3UZ6IzUQxV7OmO6HixZeHEYwDQYJKoZIhvcNAQEL
+BQAwNTEOMAwGA1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApD
+YWxpZm9ybmlhMB4XDTI2MDIxMjEwNTgwM1oXDTM2MDIxMDEwNTgwM1owUjELMAkG
+A1UEBhMCVVMxEzARBgNVBAcMCkNhbGlmb3JuaWExDjAMBgNVBAoMBVdhenVoMQ4w
+DAYDVQQLDAVXYXp1aDEOMAwGA1UEAwwFYWRtaW4wggEiMA0GCSqGSIb3DQEBAQUA
+A4IBDwAwggEKAoIBAQCkux590+ruNmYdiWYi8x2XTXBms1gTLthRqmxukIk8eoHj
+oXyGWarj+TtJ88jpWQ+P9KgeiSk3kVnJjEOt5OO7xLFM/LEWPqPk+YArmb4msUbW
+sO4Gh9+Xd7qlv0OFEcTDZ4kl/cli2T1fpodQlPGc3poxxF+B5ucePIto0E4Of1NW
+P5JtJy+wYm9vOhmxWxCKnHybNblLPdA2SQSfjRin890/BJVDkN1ENoZ6D39NKssq
+VUtsC9g1O7ModLu/O2BzMBzheyQ+OzeTHGZ1Tb7RXsGxi/NuMpS9xlFTjb891L2z
+r1knYxtWM96R+AMw/GHu9NpFHYX3vD24NUByeMtTAgMBAAGjQjBAMB0GA1UdDgQW
+BBT/6YRR+qitrGWLCspCDN4YgEAOczAfBgNVHSMEGDAWgBRv3BT6rAx2EzaS4Ia5
+p4vkW51juzANBgkqhkiG9w0BAQsFAAOCAQEAHnTkKTdpBsRJRpIAahhml9x+S1kD
+irUoSOyIG4PrXTmiA0qmAeTowU5PwnPm3Q1L/O/B+t4/sqRnuhGzUvWktcC1zjtu
+vgLczuEZxR3L2I9KbV8JNNenI5/a32kyEFlDvCrWmxmElqB68yIeYde4YZNRsSiR
+Y5+u37DNR1bXK1JYYgTDyLsAzCgM7THF36BABgz5ct+hFxj5TTJN+acBe7kQX4Sb
+IVIPKX/bpXe1O1t/Kfdt6VDeQwGbZLINMk1imzDQiMf23bqEWUGyRWVoF9jdfR7F
+8+Tge8YUB+G90g6uaTohKzgY56/h97zZWVEE3VsMWWEXXQyXoMQDP2JjzQ==
+-----END CERTIFICATE-----

+ 28 - 0
wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/root-ca-manager.key

@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDn8aUbvxKsmRQV
+3cjAxmfeLkjX5udyGg4cD6s/fSLjgFdW3RnHgXW4KZofFMgWUusAsDTRVp9NOkGW
+dRhSX4Q0vlS+t38Jm6PTxJvqkEGCWtCB2xVUQMeKMaL9FsDGbfEJOnoCbMyxQB29
+yz1cP4K3uwBeU3wCobBnf6Ui/Uo0h/psUAryOh38s7RIG3eitzq31JEmH2Y3LPsN
+MOE1xnHv+JrHGKnH9awCZAFhYpawSHVI2e9+N46tBFEi4QLV2VCBLPgWATGhkUVR
+ZQieG+bUn6ivruQZqeZGeJ3R9GGHXfBAkMKFSp0VwCnLglhfhGeWas1GY7bYiAcT
+UplUNflzAgMBAAECggEAY2D2DV1g8vLj2DqeuXpJJrlOHLOilxDy2rMb/KfxOujS
+gzVYxlKBzdaFYqvUzzvX1QOqnccvmjdLwtuJAEJMswyZ4t1cYRF+sE2dQHNunhur
+GvhzuxXGaT+7RhVpo5uXmwyjGkbjrU57b8aVE+FicLZ/AetjRv1gR/g5GTTNhpOr
+qzaEnF7Q+BP6Zu8JmhpvwFeUzWEl43rMBp0bn1qkZgH5AY8lJhK0tCy+J9NsFXDH
+tkOC3Pc3xiU4loUKwdMZy5JYI7xaSU6o2h8bTg/55DQtUwjTSc9ZuOKy2u2T+Ybn
+nxHStcU1AWPNEOdVjJvKCa0Hz1iSjFShNGggPN3+AQKBgQD5IxrLI4fG0Mx8lxF0
+vsEF+YZQVZk7U+13uThsDUu13DICcmdh1MyNf4LkjFY3727jrfR8DBtJof8R/+/4
+ZZ1tiDdX7JdOiy0ej5gC30XUbBZUst1U55ajIppNNdWZHbziXwMu9A0D5qwN2dft
+GQ0dJPrml2TKIwdX08FbgOJq0QKBgQDuVUuRnRHor5PnXTlp772E33833w6Gc4GY
+nK0F5HKIJFjDxH/qi9BVJj1H0miGDqBqAqcU6xb5pyCueSv1zuwi4A0TonGi2oDe
+LhvjWKAHI2mDxE+tqH8sTzlgPU0t32Rcvd/55esVjMrAS6Bw7Z6dZIlEzVVDBMo5
+mHZb+Y9pAwKBgDRz5aI7OszrDQJ2M+CmgLEnVdX4D6jkBK0eO/jT28rQL19Agu+g
+A+kOnZpMyaJBMNGSwFSVn/EiwDcj8XwUuM5kzXIfh8OrnbY/eTuxklwk3Za7icnk
+cFysXlw/J1dzYV8vrdXm4A6gND0+Ti3HBnHKZWDDIx9DvLoLBTykqAbhAoGAQbL9
+m+xijXQpH3RRaWSPJ9u8ZBh3FpUsuncmMyOgduseFQlMAcn86hwadHwKGDpb+h01
+Fc0gjj2GAtKgTah26747nJgBH1WAhL7NLUS3CIC4i3xIQqTaOcq1FFSRu/2C2xX0
+chzxSwV+treiSL8YJGccd/zqbgkZ/fqLVhtbbyUCgYBnfSX0Xpqzxd+WdWnSQq4c
+A4zc8A4vZZHknm9GPAmUYYadEEwF6xb1AQfPOht68n94DxD3iG5hhNfUAx/cm7Ww
+8eG/6OUai/u4DeY8ye2u2QhmhLLth489ylHNsRvGnNRYdrnHcYXpzvge8ITq8k/X
+E5bVFxSabknmo8a+SIelbA==
+-----END PRIVATE KEY-----

+ 20 - 0
wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/root-ca-manager.pem

@@ -0,0 +1,20 @@
+-----BEGIN CERTIFICATE-----
+MIIDSzCCAjOgAwIBAgIUX6UzxTYrydj8YKmmXS05ZeFb5FIwDQYJKoZIhvcNAQEL
+BQAwNTEOMAwGA1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApD
+YWxpZm9ybmlhMB4XDTI2MDIxMjEwNTgwM1oXDTM2MDIxMDEwNTgwM1owNTEOMAwG
+A1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApDYWxpZm9ybmlh
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5/GlG78SrJkUFd3IwMZn
+3i5I1+bnchoOHA+rP30i44BXVt0Zx4F1uCmaHxTIFlLrALA00VafTTpBlnUYUl+E
+NL5Uvrd/CZuj08Sb6pBBglrQgdsVVEDHijGi/RbAxm3xCTp6AmzMsUAdvcs9XD+C
+t7sAXlN8AqGwZ3+lIv1KNIf6bFAK8jod/LO0SBt3orc6t9SRJh9mNyz7DTDhNcZx
+7/iaxxipx/WsAmQBYWKWsEh1SNnvfjeOrQRRIuEC1dlQgSz4FgExoZFFUWUInhvm
+1J+or67kGanmRnid0fRhh13wQJDChUqdFcApy4JYX4RnlmrNRmO22IgHE1KZVDX5
+cwIDAQABo1MwUTAdBgNVHQ4EFgQUb9wU+qwMdhM2kuCGuaeL5FudY7swHwYDVR0j
+BBgwFoAUb9wU+qwMdhM2kuCGuaeL5FudY7swDwYDVR0TAQH/BAUwAwEB/zANBgkq
+hkiG9w0BAQsFAAOCAQEAMY/83p2A2zLoqXPmbyjswEShB9qJae8EgVH4K54o3pAN
+lrUL3E073lVngsxOvU8siMXjVLZWd0hi2lhsfHIw22M7O4FjINS7STwDmdBID610
+5dz2NQJ5ipnWk4R4/Ex2vhZUOUK7PsMZe55PeMJ4irVNYInVwOUvMxRYYT7i8yfq
+JgaCmL/XKbfv0qMlzW55a9a7UYVNvKYzjgsHncMdInkgWZ1NC6Hf7YpL9bwgF9pd
+z/B8drsjIvdXF+cJ+zHHp3rWY6mOd2J+y4+9PpR9Z+OCQfTfUgLxUrnCZgoM9MEY
+1N/IVMAvKcirxx+zwfKE+X/uArosROdFGTbzRmdKRg==
+-----END CERTIFICATE-----

+ 28 - 0
wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/root-ca.key

@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDn8aUbvxKsmRQV
+3cjAxmfeLkjX5udyGg4cD6s/fSLjgFdW3RnHgXW4KZofFMgWUusAsDTRVp9NOkGW
+dRhSX4Q0vlS+t38Jm6PTxJvqkEGCWtCB2xVUQMeKMaL9FsDGbfEJOnoCbMyxQB29
+yz1cP4K3uwBeU3wCobBnf6Ui/Uo0h/psUAryOh38s7RIG3eitzq31JEmH2Y3LPsN
+MOE1xnHv+JrHGKnH9awCZAFhYpawSHVI2e9+N46tBFEi4QLV2VCBLPgWATGhkUVR
+ZQieG+bUn6ivruQZqeZGeJ3R9GGHXfBAkMKFSp0VwCnLglhfhGeWas1GY7bYiAcT
+UplUNflzAgMBAAECggEAY2D2DV1g8vLj2DqeuXpJJrlOHLOilxDy2rMb/KfxOujS
+gzVYxlKBzdaFYqvUzzvX1QOqnccvmjdLwtuJAEJMswyZ4t1cYRF+sE2dQHNunhur
+GvhzuxXGaT+7RhVpo5uXmwyjGkbjrU57b8aVE+FicLZ/AetjRv1gR/g5GTTNhpOr
+qzaEnF7Q+BP6Zu8JmhpvwFeUzWEl43rMBp0bn1qkZgH5AY8lJhK0tCy+J9NsFXDH
+tkOC3Pc3xiU4loUKwdMZy5JYI7xaSU6o2h8bTg/55DQtUwjTSc9ZuOKy2u2T+Ybn
+nxHStcU1AWPNEOdVjJvKCa0Hz1iSjFShNGggPN3+AQKBgQD5IxrLI4fG0Mx8lxF0
+vsEF+YZQVZk7U+13uThsDUu13DICcmdh1MyNf4LkjFY3727jrfR8DBtJof8R/+/4
+ZZ1tiDdX7JdOiy0ej5gC30XUbBZUst1U55ajIppNNdWZHbziXwMu9A0D5qwN2dft
+GQ0dJPrml2TKIwdX08FbgOJq0QKBgQDuVUuRnRHor5PnXTlp772E33833w6Gc4GY
+nK0F5HKIJFjDxH/qi9BVJj1H0miGDqBqAqcU6xb5pyCueSv1zuwi4A0TonGi2oDe
+LhvjWKAHI2mDxE+tqH8sTzlgPU0t32Rcvd/55esVjMrAS6Bw7Z6dZIlEzVVDBMo5
+mHZb+Y9pAwKBgDRz5aI7OszrDQJ2M+CmgLEnVdX4D6jkBK0eO/jT28rQL19Agu+g
+A+kOnZpMyaJBMNGSwFSVn/EiwDcj8XwUuM5kzXIfh8OrnbY/eTuxklwk3Za7icnk
+cFysXlw/J1dzYV8vrdXm4A6gND0+Ti3HBnHKZWDDIx9DvLoLBTykqAbhAoGAQbL9
+m+xijXQpH3RRaWSPJ9u8ZBh3FpUsuncmMyOgduseFQlMAcn86hwadHwKGDpb+h01
+Fc0gjj2GAtKgTah26747nJgBH1WAhL7NLUS3CIC4i3xIQqTaOcq1FFSRu/2C2xX0
+chzxSwV+treiSL8YJGccd/zqbgkZ/fqLVhtbbyUCgYBnfSX0Xpqzxd+WdWnSQq4c
+A4zc8A4vZZHknm9GPAmUYYadEEwF6xb1AQfPOht68n94DxD3iG5hhNfUAx/cm7Ww
+8eG/6OUai/u4DeY8ye2u2QhmhLLth489ylHNsRvGnNRYdrnHcYXpzvge8ITq8k/X
+E5bVFxSabknmo8a+SIelbA==
+-----END PRIVATE KEY-----

+ 20 - 0
wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/root-ca.pem

@@ -0,0 +1,20 @@
+-----BEGIN CERTIFICATE-----
+MIIDSzCCAjOgAwIBAgIUX6UzxTYrydj8YKmmXS05ZeFb5FIwDQYJKoZIhvcNAQEL
+BQAwNTEOMAwGA1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApD
+YWxpZm9ybmlhMB4XDTI2MDIxMjEwNTgwM1oXDTM2MDIxMDEwNTgwM1owNTEOMAwG
+A1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApDYWxpZm9ybmlh
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5/GlG78SrJkUFd3IwMZn
+3i5I1+bnchoOHA+rP30i44BXVt0Zx4F1uCmaHxTIFlLrALA00VafTTpBlnUYUl+E
+NL5Uvrd/CZuj08Sb6pBBglrQgdsVVEDHijGi/RbAxm3xCTp6AmzMsUAdvcs9XD+C
+t7sAXlN8AqGwZ3+lIv1KNIf6bFAK8jod/LO0SBt3orc6t9SRJh9mNyz7DTDhNcZx
+7/iaxxipx/WsAmQBYWKWsEh1SNnvfjeOrQRRIuEC1dlQgSz4FgExoZFFUWUInhvm
+1J+or67kGanmRnid0fRhh13wQJDChUqdFcApy4JYX4RnlmrNRmO22IgHE1KZVDX5
+cwIDAQABo1MwUTAdBgNVHQ4EFgQUb9wU+qwMdhM2kuCGuaeL5FudY7swHwYDVR0j
+BBgwFoAUb9wU+qwMdhM2kuCGuaeL5FudY7swDwYDVR0TAQH/BAUwAwEB/zANBgkq
+hkiG9w0BAQsFAAOCAQEAMY/83p2A2zLoqXPmbyjswEShB9qJae8EgVH4K54o3pAN
+lrUL3E073lVngsxOvU8siMXjVLZWd0hi2lhsfHIw22M7O4FjINS7STwDmdBID610
+5dz2NQJ5ipnWk4R4/Ex2vhZUOUK7PsMZe55PeMJ4irVNYInVwOUvMxRYYT7i8yfq
+JgaCmL/XKbfv0qMlzW55a9a7UYVNvKYzjgsHncMdInkgWZ1NC6Hf7YpL9bwgF9pd
+z/B8drsjIvdXF+cJ+zHHp3rWY6mOd2J+y4+9PpR9Z+OCQfTfUgLxUrnCZgoM9MEY
+1N/IVMAvKcirxx+zwfKE+X/uArosROdFGTbzRmdKRg==
+-----END CERTIFICATE-----

wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/wazuh.dashboard-key.pem (+28 -0)

@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDiYuHoFpJQu1+J
+Di9Rk94azWptd5Oli38WDsTpiuIVeHsJCBxE814RQUKtG6gd/o4maLBtRCrJ+r2
+ShCvK6Fdf2taM8xziaAd5xgW2NVN/m51eOCippJiDA8Y1POznBONSq7MONT98tNt
+GssBotP8QoXnQJOrbh2tWc00WY15Vc7xWcPsPbwJ2t/cZUNHbpvxE0G8iPMn7ZAU
+gk1DfbWWaZrNyGo2IcW3ffVQUhoAgnFkvbjFJFqauVP3LRHbRg5s+XDYrDonf57E
+ST+KgMTO/EUshVd5pKiXGEc2kMGDX2AosfDSPP3YqBkAPnWk3GmEG/yMc82+qKu5
+M49jRA0lAgMBAAECggEACwjVO6hkZIyXHfTilYQCOghfvgns8Ax56VbgakpeZm+c
+aq1ht24Tq2arb3FXNkZWBMELiEDFgAVnra5ZtqNI/KZ9Td3aFVE1lyXweQydbDox
+zOzU/8vGtTWJQ6Vj/r+YysZDhuJhePW6pt+R2l6P0dGCqibuC7Xhh6Jc683oC2xM
+mrMGp/9tuXVKuatpcdpPqsuZ2AWTSdaYXVrcN4fD5f8ejO1UBSFBlOOCzMx2C47H
+sUxkSsEIuwYG9D/XUI+OGZXaaSulR3S2ubfsZo/URB1PMMNN+tZ1hiqx+2joubs3
+P2lyO5858/Bg5BaC2Gs9Ao1Z+DMNbXRNv7BLZ5V08QKBgQD9mPP5ai9FiNJ/3efY
+mSEZgt6oXdJ6GZyhe+rGRq8byh4nRBDJngFmVON6FkzjQUv64nj9dG5dn+Eu+l79
+LgGqabfUJu3IrjeW/z8EBaa8qIAuZKd8SfC4jotlEbCTfgsmfj8qCtH4IY9nDze0
+BtZq3g9o98ZjVEudFYwlaVUJqwKBgQDkh+8z/JmkBEj8QTAKSHKzc6ZIbS9AXRZZ
+aTYD0JIwUgwg9S3WotvwnqczVQG3Wr810DjfKIjUlaWHmhrLPHTGRPmdSROEI2dV
+qdYhvTAZqFBbJo1/A36RObTxCq22bquIHvLeep931nPsoYLVtL6WYw4SyIbA/Y35
+e5zHp2qUbwKBgQCVLpovWhjO5es2zzqpP4OqN0N2diLwMwriMDxvQXuXdHIClVbu
+1CVspnlfA6ldcrcYsouMRib6qqfUc/LXK25Nan16rx/okxwelq7iVdS9XL5zDEE+
+q1yRpUE5RovCaD50+YV83Pqh5lQuw1P4cqFGIrWcAU5Sdm84zEkyZOFimwKBgQCf
+BghpwIiZHXI8NpBbV3aZcQxwsamDvELlDNVNakGP5kgSVwoCpWku0ve+PJTpJfiQ
+Vch9YRN1+nwpFA85BWSs4ypfTI6MEKbDcV9UMvXZpMnl47nqfGACZomGgcvHetNZ
+8U9HiUSWe2BHdUw5sYA93cfZQjii6s10oZPDSrhbeQKBgQDUP+/X7EtB+YQs6UXP
+wQuhawvAJiuhIz32/WrwzsWPfB65xtq/L6E33l/UQC8otif2CW1iAYUKM9Vmkdz
+DEEJjHipKpaZ11fnhIQVp+q8SNRhGyTycDJlaxxk/aR7Txdj+56qAesafQ48nW6Q
+pRjpUOIsJ/zvC4NCkEZP0sjniw==
+-----END PRIVATE KEY-----

wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/wazuh.dashboard.pem (+22 -0)

@@ -0,0 +1,22 @@
+-----BEGIN CERTIFICATE-----
+MIIDlTCCAn2gAwIBAgIULFbK3UZ6IzUQxV7OmO6HixZeHEkwDQYJKoZIhvcNAQEL
+BQAwNTEOMAwGA1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApD
+YWxpZm9ybmlhMB4XDTI2MDIxMjEwNTgwNFoXDTM2MDIxMDEwNTgwNFowXDELMAkG
+A1UEBhMCVVMxEzARBgNVBAcMCkNhbGlmb3JuaWExDjAMBgNVBAoMBVdhenVoMQ4w
+DAYDVQQLDAVXYXp1aDEYMBYGA1UEAwwPd2F6dWguZGFzaGJvYXJkMIIBIjANBgkq
+hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4mLh6BaSULtfifg4vUZPeGs1qbXeTpYt
+/Fg7E6YriFXh7CQgcRPNeEUFCrRuoHf6OJmiwbUQqyfq9koQryuhXX9rWjPMc4mg
+HecYFtjVTf5udXjgoqaSYgwPGNTzs5wTjUquzDjU/fLTbRrLAaLT/EKF50CTq24d
+rVnNNFmNeVXO8VnD7D28Cdrf3GVDR26b8RNBvIjzJ+2QFIJNQ321lmmazchqNiHF
+t331UFIaAIJxZL24xSRamrlT9y0R20YObPlw2Kw6J3+exEk/ioDEzvxFLIVXeaSo
+lxhHNpDBg19gKLHw0jz92KgZAD51pNxphBv8jHPNvqiruTOPY0QNJQIDAQABo3Yw
+dDAfBgNVHSMEGDAWgBRv3BT6rAx2EzaS4Ia5p4vkW51juzAJBgNVHRMEAjAAMAsG
+A1UdDwQEAwIE8DAaBgNVHREEEzARgg93YXp1aC5kYXNoYm9hcmQwHQYDVR0OBBYE
+FFg7twssEPAIfjNzs+t1+qGf/7wUMA0GCSqGSIb3DQEBCwUAA4IBAQBdGj82dYej
+iAzxyT3C387qy8/PugXFQYOmsSgdzdvfyqxsnc6IVlh0DTNEj/f6ortd0Wk04fku
+ekm6nD3lYYjTBZo9NB/2vg2P7PFVMhT44hIRF3hA+40WyMso6Un3xr4moBjd3PyM
+yj39vPwZnWvD6hOC5pAIz+QMCVlCz+TVCnhq7vpdfCb1kJOxmrXrjUZtYumdXnAY
+2x4RfTlgRp3ePzOenknkpMgzXkkMf/ZXvxdoAouLW00bufulSX8WcnrOWbMKv5Lg
+QBJhRnXRaJ+GCqBhTWxaigdWBtiMS6lxrIFfj5T7DvRibT1uQzXwnb1O1OoLi3Sn
+TatH5kKvLje6
+-----END CERTIFICATE-----

wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/wazuh.indexer-key.pem (+28 -0)

@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC03QaMEhErjsjV
+qsgJ5UYUgMwAR8tia0O4LdT5FvMtq7Aey40CH81gQ0yd8wkOIA+ZKr7TIzqVp9QS
+epHDukfdIz3qoM8p9mR39KZByG1VYrxhP88TGadjdEfZTqkY8mXz9PSoECVnPR5i
+gcjsg8IlqfgVkcOdmKwwCM0BsGakeJbu/gJwHE1Xhfi/eH+5XFK9kXaqzcZ61/9T
+bAR7OSt3my1ug+F3dU17KDD2ySbMQ5EyVTlpg30i9fDaFM+/ZPxEOnWDrQIN6DXr
+T0VkDmwBWn+8NppwLdJNnmlX4BLC+NClo8Sh9NhmMFZPb2IP2tdI2mGkuwswLeq4
+HvYzqHxhAgMBAAECggEAFlgSBX776Q4wSJlbMkHFeSX6TfSQr12CFcBvxcAsldFW
+pk9OYdtM854M2pyaW0jhtHH/9jSteaysugWzeWNLmonOjeyE+3GpstoiKhFZVImo
+rTkFW545PEOy1qltoZvctZlnGlY4ULtPxCq1iGa0txN7ByslaBRi/WIw9Yr1+06G
+yzG8h/i3vDivgpETU4/mytxionrPtEMDkoU+iT35nC0P8W42haU7Y66ilqnKdkoO
+j4M28PUlqPRLTbkDzhDds+8hcn7W2qbmSyRuWQPdJmCI8HypV8AhEopAtA4Zdf+I
+EXIdkJD34LcfnFfzGTJdcJB4q2ryx0Mb5+GdD4D/SwKBgQD6cfqF6suoTHeUYqME
+8dNw08X1rPm6n5kcCDIkKDbqijGD2FR8Rw8cmFK3QAhyLJmBDQG+ablUo98WwTY
+O05fvzvhUyfZXpEtrZIvVKd0/gbAf6tjxpbrYwOK0vErrVkEhZKRCSy4ryEsm1ev
+ircIZ0bZvKpWrRGfZOB4B6esywKBgQC43/aMGibWn4IPti90HLMPzHQ/uFd15Gaz
+S9zFzjeIKGyI50OSbCDIydrrsBJ/S9zGA6g91HwvJz39rFB3ykKp1lasCrH7qQoz
+Yif5IX9GPdjqIF5SDATgijifXgBK2pevYnnsCcUPA8AG7I+9O/GoHnXxvMqdxY+B
+0fR9yeuiAwKBgG2VWc0nA53Md7ZRwor3sClygDUqGOW9TTidZ3ra7cewGM8xA6Bb
+I8O/OJsNQHWH36eLx0gWDNTi3y3GlcQXjx+OCaF6RUFzg4q9G+3h2LP0QvgP5Opv
+hrHQTUh9LFG0M/MqjwsvPIZC+v0Nq7x/sb7XkcTMLKxoZgGcnitnDhMpAoGAYWDW
+pIVB39q0z0HPTQGw76lpsgaPSvG7hsV2zFoKthVU1ee6l+2MdzabsXlUxOhYqZRT
+kf3SS6QH6w5QdEh9RKg5jvUzOrOXQ+l31KnoOD9reicCh4T9LKihmpAQ51yseR0N
+y156BaacBwmjzLE+YKdqyKIAt4nQRTkp5vfsvbECgYEA8biHP5np/CUwWlYEbZQa
+vhHJax0l3hxMGTYT9K+GM8rlcEKxfsdWyDWOIUXrz8KKoixS1QTwqOqeAgPkTGRa
+b28vgFV4s0SGXTVeGXspBL0wWBT9K9d59PhmNSDOyPlsr9PBcUorL0UPJSb2CMzf
+uNOf6EKPJ3jBN9Djea2ui/o=
+-----END PRIVATE KEY-----

wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/wazuh.indexer.pem (+22 -0)

@@ -0,0 +1,22 @@
+-----BEGIN CERTIFICATE-----
+MIIDkTCCAnmgAwIBAgIULFbK3UZ6IzUQxV7OmO6HixZeHEcwDQYJKoZIhvcNAQEL
+BQAwNTEOMAwGA1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApD
+YWxpZm9ybmlhMB4XDTI2MDIxMjEwNTgwNFoXDTM2MDIxMDEwNTgwNFowWjELMAkG
+A1UEBhMCVVMxEzARBgNVBAcMCkNhbGlmb3JuaWExDjAMBgNVBAoMBVdhenVoMQ4w
+DAYDVQQLDAVXYXp1aDEWMBQGA1UEAwwNd2F6dWguaW5kZXhlcjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBALTdBowSESuOyNWqyAnlRhSAzABHy2JrQ7gt
+1PkW8y2rsB7LjQIfzWBDTJ3zCQ4gD5kqvtMjOpWn1BJ6kcO6R90jPeqgzyn2ZHf0
+pkHIbVVivGE/zxMZp2N0R9lOqRjyZfP09KgQJWc9HmKByOyDwiWp+BWRw52YrDAI
+zQGwZqR4lu7+AnAcTVeF+L94f7lcUr2RdqrNxnrX/1NsBHs5K3ebLW6D4Xd1TXso
+MPbJJsxDkTJVOWmDfSL18NoUz79k/EQ6dYOtAg3oNetPRWQObAFaf7w2mnAt0k2e
+aVfgEsL40KWjxKH02GYwVk9vYg/a10jaYaS7CzAt6rge9jOofGECAwEAAaN0MHIw
+HwYDVR0jBBgwFoAUb9wU+qwMdhM2kuCGuaeL5FudY7swCQYDVR0TBAIwADALBgNV
+HQ8EBAMCBPAwGAYDVR0RBBEwD4INd2F6dWguaW5kZXhlcjAdBgNVHQ4EFgQUL2Gn
+QSqH2NsnDES7MfEod84Syk8wDQYJKoZIhvcNAQELBQADggEBAMy4ijWm7m3zzEQL
+2AGtguF9VjSh2voCqdleNYgkkaDfbRJxl5woS+e03hjTAgVcOJwtbxa8vC2Mcfwf
+JxYAxZlSlHEJOpZDTC4/tTbbuccRTz9D1PqzgC9+w0dKknjFikqblQrhNRaSvnl8
+oPNwsR9lrl/qWXSH7Mpt6ryIv+SMTVUddbAZe1d44HvEgei9sgB36fOp2P+Rebp6
+MWdEOlmhqCOWFwwOhs1V3IgKkhOfveqFgguUX7iZAtUF1daW83/qDvCF2EXWFXsJ
+/tg3o8aYpN/w6JcBA4qcxVRJODOPGR1YFWWK9+IPPyGzRsa/rdmt9AWkHEd2T1rw
+qpRtfRw=
+-----END CERTIFICATE-----

wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/wazuh.manager-key.pem (+28 -0)

@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC3V1oiPYhG611G
+erjzSFBAlg1y4XA7/reGBhFsJKTA+nQ0DVd95QFSDMyh09ZX+Qz3Cdsqqi4Nonrk
+1Z4wZoBx2JVpofQOdkOsKTQWH4R+sOGK7Ad0nnYXQjs5QjvQ7V0RRVgsaP0CNFj6
+hRgr9Oj0EGuVGDK7r+KXYyrOvAzcXYV2qrAzFpB3sP4FN6Lvlvb1/uYBceKxK7ol
+3WoUD2pEKm+Fl8sUH17kY0crIqvEsZtx9Tj51iQwDu09GKQvxoFUdkpIRjedlTO4
+HND4RP8hhJnwA6lmMZm4vNLhuwPef+VZFCN1Xyygy+tDQ4IB1wIloGinOtGjRnKP
+P91ounqBAgMBAAECggEAANQqAeQd0NZBCG/HFMBzrmsPOgD3YSoMWdR+sSq0PPQU
+4ORbjCPkHuMUbLnYqvKjAp3eigGVbjXZEN4/VhAsjfcw67aR9BvIQAe6psvzLSBv
+EJHzEa2isoW331EjlJTyGgEifibmV+N1MIK4AoDbqbjGR7kBh7Qqlc0atS8H7toS
+agtrBaNZfoIpZRhALVWpnRgeL+1+t5tNUsDt8k4zLpPIZQ1M1jJSwLqaGqGREIUT
+p3m41mQ9NFneYSWLZmcI7L0YgLz9vpxdFQMgMpDfzP//sB4INMSzb34NphiNh281
+vf3vp2FnQfhDiI8w7w6aFPMHn1/79YCIukuMlfF/LQKBgQDeRVijws6niJqug0nw
+AU0XGRa/VCJj+GU0WtEUMVinwy+wFias807jLXuj9LGdTtAQRHkv9SUnOkna/SYk
+taqP6wRFuYO9U4pphmsEfpL5Fl76zQASGAjx58vMjPQSsx/u8ixDZqWcb9TK+oBS
+Lzz+OGZHZoVHX4NiM3NkZqt6lQKBgQDTKbAuhf7GtQ4vUB1sbhpkSSfal/tVJ8zZ
+IlLOOrqoxbVs1lJSMfV3/y+BT1qizQlcCYOKwaednHwRlqR4LXkz7hKJxyZM1YGo
+Ylaq0xJGGSV1jtc9m5RHuupr9EB2CowYyv3I4mN2YHlkRYs9bzH7rFIMKe7uu3Nd
+agiiD5jxPQKBgQDLVSL2hHKqPkRK2x3bakVMmQ3/L4dabtSeZWoZD99rcRqB+nGd
+C+Oh3WzbGzEUmBGsoAdBAQDg9uizZZvsPyhuCe/ZnRFQNElNqcLi1Ku9JGL1Cm5D
+HyunqIX+dP+ez7Cp1W76pb9g8cj3etvC0yX35j5imP3Zwh2dyzWHpoi+VQKBgFH5
+ox3MgwXdD+6qKWIItFIuXDxuN/HtC4dX3dGV1xTh+/aOlVK3dlXpSSXoCoWdF38V
+am2ZlFqJf1jMpHjLHnxcdfHq0CGP2U/nLUIPws5XwMUMeN6/4Safl5XlMokguxZ8
+51zvFjHEbhvRK6bj3gGX+hoixVEEkFq5aTSQ3Yz5AoGBAITv4dOTGTPPVTMCO5x6
+6zjYinAND2+0OWSuy3qRkmv/ePBNr2YPJtVuwcqNS+gPSt9JirDtE418sXhMuA6T
+HJcwlBPkMg3hrr4TZVD6S1+7qV4bcGpi1/wXNqSYG/EyeFMdrk0HUk1axeQqPUGz
+mtHqQxmSyXTXnQQ8BlU/5mda
+-----END PRIVATE KEY-----

wazuh-docker/single-node/config/wazuh_indexer_ssl_certs/wazuh.manager.pem (+22 -0)

@@ -0,0 +1,22 @@
+-----BEGIN CERTIFICATE-----
+MIIDkTCCAnmgAwIBAgIULFbK3UZ6IzUQxV7OmO6HixZeHEgwDQYJKoZIhvcNAQEL
+BQAwNTEOMAwGA1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApD
+YWxpZm9ybmlhMB4XDTI2MDIxMjEwNTgwNFoXDTM2MDIxMDEwNTgwNFowWjELMAkG
+A1UEBhMCVVMxEzARBgNVBAcMCkNhbGlmb3JuaWExDjAMBgNVBAoMBVdhenVoMQ4w
+DAYDVQQLDAVXYXp1aDEWMBQGA1UEAwwNd2F6dWgubWFuYWdlcjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBALdXWiI9iEbrXUZ6uPNIUECWDXLhcDv+t4YG
+EWwkpMD6dDQNV33lAVIMzKHT1lf5DPcJ2yqqLg2ieuTVnjBmgHHYlWmh9A52Q6wp
+NBYfhH6w4YrsB3SedhdCOzlCO9DtXRFFWCxo/QI0WPqFGCv06PQQa5UYMruv4pdj
+Ks68DNxdhXaqsDMWkHew/gU3ou+W9vX+5gFx4rEruiXdahQPakQqb4WXyxQfXuRj
+Rysiq8Sxm3H1OPnWJDAO7T0YpC/GgVR2SkhGN52VM7gc0PhE/yGEmfADqWYxmbi8
+0uG7A95/5VkUI3VfLKDL60NDggHXAiWgaKc60aNGco8/3Wi6eoECAwEAAaN0MHIw
+HwYDVR0jBBgwFoAUb9wU+qwMdhM2kuCGuaeL5FudY7swCQYDVR0TBAIwADALBgNV
+HQ8EBAMCBPAwGAYDVR0RBBEwD4INd2F6dWgubWFuYWdlcjAdBgNVHQ4EFgQUI45X
+qnT7ZcIKKm2SQTq+gKtsLX8wDQYJKoZIhvcNAQELBQADggEBAKGlp8dGRi7kf20b
+Zud+mnM/mLJD6/AlOf9IcSbE/zWLiCsx8eLtNh/NIJoY6q4+guNhA9A+15MdA+NK
+oTHNcu485K9duHCB+1eDojIqbenZRKpes9icCZem0tE39VLJxGcVrrAZflvA2F6Q
+LSzrlr+UbywjH/2zvk86ez0g9eIqDmvJF7mOkICxD5Q6Dy30ePpSrGJTKhF1CG8m
+0hfbdNesGfcKCxePfXNS2lVJPSwN6r/5J5tSFxHGzggKXO0UrRblzO6ViiWblavb
+mcvvQ37V+E+WZ29vDwHGfK+ozl2DT2uNlvcr6iC4zlDUQ1nza8n2InuzuBp8deyT
+i6/AQf8=
+-----END CERTIFICATE-----
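These appear to be the demo TLS certificates generated for the wazuh-docker single-node stack. The PEM blocks above can be sanity-checked without external tools; a minimal sketch using Python's stdlib `ssl.PEM_cert_to_DER_cert` helper, assuming `root-ca.pem` is copied verbatim from the diff:

```python
import ssl

# root-ca.pem exactly as added in the diff above.
ROOT_CA_PEM = """\
-----BEGIN CERTIFICATE-----
MIIDSzCCAjOgAwIBAgIUX6UzxTYrydj8YKmmXS05ZeFb5FIwDQYJKoZIhvcNAQEL
BQAwNTEOMAwGA1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApD
YWxpZm9ybmlhMB4XDTI2MDIxMjEwNTgwM1oXDTM2MDIxMDEwNTgwM1owNTEOMAwG
A1UECwwFV2F6dWgxDjAMBgNVBAoMBVdhenVoMRMwEQYDVQQHDApDYWxpZm9ybmlh
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5/GlG78SrJkUFd3IwMZn
3i5I1+bnchoOHA+rP30i44BXVt0Zx4F1uCmaHxTIFlLrALA00VafTTpBlnUYUl+E
NL5Uvrd/CZuj08Sb6pBBglrQgdsVVEDHijGi/RbAxm3xCTp6AmzMsUAdvcs9XD+C
t7sAXlN8AqGwZ3+lIv1KNIf6bFAK8jod/LO0SBt3orc6t9SRJh9mNyz7DTDhNcZx
7/iaxxipx/WsAmQBYWKWsEh1SNnvfjeOrQRRIuEC1dlQgSz4FgExoZFFUWUInhvm
1J+or67kGanmRnid0fRhh13wQJDChUqdFcApy4JYX4RnlmrNRmO22IgHE1KZVDX5
cwIDAQABo1MwUTAdBgNVHQ4EFgQUb9wU+qwMdhM2kuCGuaeL5FudY7swHwYDVR0j
BBgwFoAUb9wU+qwMdhM2kuCGuaeL5FudY7swDwYDVR0TAQH/BAUwAwEB/zANBgkq
hkiG9w0BAQsFAAOCAQEAMY/83p2A2zLoqXPmbyjswEShB9qJae8EgVH4K54o3pAN
lrUL3E073lVngsxOvU8siMXjVLZWd0hi2lhsfHIw22M7O4FjINS7STwDmdBID610
5dz2NQJ5ipnWk4R4/Ex2vhZUOUK7PsMZe55PeMJ4irVNYInVwOUvMxRYYT7i8yfq
JgaCmL/XKbfv0qMlzW55a9a7UYVNvKYzjgsHncMdInkgWZ1NC6Hf7YpL9bwgF9pd
z/B8drsjIvdXF+cJ+zHHp3rWY6mOd2J+y4+9PpR9Z+OCQfTfUgLxUrnCZgoM9MEY
1N/IVMAvKcirxx+zwfKE+X/uArosROdFGTbzRmdKRg==
-----END CERTIFICATE-----
"""

# PEM -> DER: raises ValueError on bad framing lines and
# binascii.Error if the base64 body is corrupted.
der = ssl.PEM_cert_to_DER_cert(ROOT_CA_PEM)

# A DER certificate is an ASN.1 SEQUENCE: tag 0x30, then a length field.
assert der[0] == 0x30
print(len(der))  # 847 bytes for this CA
```

This only proves the base64 survived the copy intact; full chain verification (e.g. that `wazuh.indexer.pem` is signed by this CA) would still need `openssl verify` or the `cryptography` package.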

tum/freight: 0054_auto_20220606_1336.py (615 B)
```python
# Generated by Django 3.2.13 on 2022-06-06 06:36

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('pos', '0053_auto_20220606_1058'),
    ]

    operations = [
        migrations.AddField(
            model_name='bookingtable',
            name='qrCode',
            field=models.FileField(blank=True, null=True, upload_to='uploads/%Y/%m/%d/'),
        ),
        migrations.AddField(
            model_name='table',
            name='qrCode',
            field=models.FileField(blank=True, null=True, upload_to='uploads/%Y/%m/%d/'),
        ),
    ]
```
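Both added fields use the date-templated `upload_to='uploads/%Y/%m/%d/'`; Django runs such patterns through `strftime` when a file is saved, so uploads land in per-day folders. A minimal sketch of that expansion (the `expand_upload_to` helper is illustrative, not Django API):

```python
from datetime import datetime

def expand_upload_to(pattern: str, when: datetime) -> str:
    # Django applies strftime() formatting to FileField.upload_to
    # strings at save time; this mimics that expansion for a fixed date.
    return when.strftime(pattern)

print(expand_upload_to('uploads/%Y/%m/%d/', datetime(2022, 6, 6)))
# uploads/2022/06/06/
```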