A Simple Way to Install Talos Linux on Any Machine, with Any Provider

Talos Linux is a specialized operating system designed for running Kubernetes. First and foremost, it handles full lifecycle management for Kubernetes control-plane components. It also puts a strong focus on security, minimizing the user’s ability to influence the system. A distinctive feature of this OS is the near-complete absence of executables, including the absence of a shell and the inability to log in via SSH. All configuration of Talos Linux is done through a Kubernetes-like API.

Talos Linux is provided as a set of pre-built images for various environments.

The standard installation method assumes you will take a prepared image for your specific cloud provider or hypervisor and create a virtual machine from it, or go the bare-metal route and boot the Talos Linux image using ISO or PXE methods.

Unfortunately, this does not work when dealing with providers that offer a pre-configured server or virtual machine without letting you upload a custom image or even use an ISO for installation through KVM. In that case, your choices are limited to the distributions the cloud provider makes available.

Usually, two questions need to be answered during the Talos Linux installation process: (1) how to load and boot the Talos Linux image, and (2) how to prepare and apply the machine-config (the main configuration file for Talos Linux) to the booted system. Let’s talk about each of these steps.

Booting into Talos Linux

One of the most universal methods is to use a Linux kernel mechanism called kexec.

kexec is both a utility and a system call of the same name. It allows you to boot into a new kernel from the running system without performing a physical reboot of the machine. This means you can download the required vmlinuz and initramfs for Talos Linux, specify the needed kernel command line, and immediately switch over to the new system. The result is the same as if the kernel had been loaded by the standard bootloader at startup, only in this case your existing Linux operating system acts as the bootloader.

Essentially, all you need is any Linux distribution. It could be a physical server running in rescue mode, or even a virtual machine with a pre-installed operating system. Let’s walk through the process using Ubuntu, but it could be literally any other Linux distribution.

Log in via SSH and install the kexec-tools package; it contains the kexec utility, which you’ll need later:

apt install kexec-tools -y

Next, you need to download Talos Linux itself, that is, the kernel and initramfs. They can be downloaded from the official repository:

wget -O /tmp/vmlinuz https://github.com/siderolabs/talos/releases/latest/download/vmlinuz-amd64
wget -O /tmp/initramfs.xz https://github.com/siderolabs/talos/releases/latest/download/initramfs-amd64.xz

If you have a physical server rather than a virtual one, you’ll need to build your own image with all the necessary firmware using the Talos Factory service. Alternatively, you can use the pre-built images from the Cozystack project (a solution for building clouds that we created at Ænix and transferred to the CNCF Sandbox); these images already include all required modules and firmware:

wget -O /tmp/vmlinuz https://github.com/cozystack/cozystack/releases/latest/download/kernel-amd64
wget -O /tmp/initramfs.xz https://github.com/cozystack/cozystack/releases/latest/download/initramfs-metal-amd64.xz

Now you need the network information that will be passed to Talos Linux at boot time. Below is a small script that gathers everything you need and sets environment variables:

# Primary IPv4 address, default gateway, and outgoing interface used to reach the internet
IP=$(ip -o -4 route get 8.8.8.8 | awk -F"src " '{sub(" .*", "", $2); print $2}')
GATEWAY=$(ip -o -4 route get 8.8.8.8 | awk -F"via " '{sub(" .*", "", $2); print $2}')
ETH=$(ip -o -4 route get 8.8.8.8 | awk -F"dev " '{sub(" .*", "", $2); print $2}')
# Prefix length of the address, converted into a dotted-decimal netmask
CIDR=$(ip -o -4 addr show "$ETH" | awk -F"inet $IP/" '{sub(" .*", "", $2); print $2; exit}')
NETMASK=$(echo "$CIDR" | awk '{p=$1;for(i=1;i<=4;i++){if(p>=8){o=255;p-=8}else{o=256-2^(8-p);p=0}printf(i<4?o".":o"\n")}}')
# Predictable (onboard or path-based) interface name reported by udev
DEV=$(udevadm info -q property "/sys/class/net/$ETH" | awk -F= '$1~/ID_NET_NAME_ONBOARD/{print $2; exit} $1~/ID_NET_NAME_PATH/{v=$2} END{if(v) print v}')

You can pass these parameters via the kernel cmdline. Use the ip= parameter to configure the network through the kernel-level IP configuration mechanism. This method lets the kernel automatically set up interfaces and assign IP addresses during boot, based on information passed through the kernel cmdline. It’s a built-in kernel feature enabled by the CONFIG_IP_PNP option. In Talos Linux, this feature is enabled by default; all you need to do is provide properly formatted network settings in the kernel cmdline.
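
For reference, the general field layout of the ip= option, as described in the kernel’s nfsroot documentation, is shown below. The values gathered above fill in the client IP, gateway, netmask, and device fields; the remaining fields are left empty:

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>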

Set the CMDLINE variable with the ip option that contains the current system’s settings, and then print it out:

CMDLINE="init_on_alloc=1 slab_nomerge pti=on console=tty0 console=ttyS0 printk.devkmsg=on talos.platform=metal ip=${IP}::${GATEWAY}:${NETMASK}::${DEV}:::::"
echo $CMDLINE

The output should look something like:

init_on_alloc=1 slab_nomerge pti=on console=tty0 console=ttyS0 printk.devkmsg=on talos.platform=metal ip=10.0.0.131::10.0.0.1:255.255.255.0::eno2np0:::::

Verify that everything looks correct, then load our new kernel:

kexec -l /tmp/vmlinuz --initrd=/tmp/initramfs.xz --command-line="$CMDLINE"
kexec -e

The first command loads the Talos kernel into RAM; the second switches the current system over to this new kernel.

As a result, you’ll get a running instance of Talos Linux with networking configured. However, it’s currently running entirely in RAM, so if the server reboots, the system will return to its original state (by loading the OS from the hard drive, e.g., Ubuntu).

Applying machine-config and installing Talos Linux on disk

To install Talos Linux persistently on disk and replace the current OS, you need to apply a machine-config specifying the disk to install to. To configure the machine, you can use either the official talosctl utility or Talm, a utility maintained by the Cozystack project (Talm works with vanilla Talos Linux as well).

First, let’s consider configuration using talosctl. Before applying the config, ensure it includes network settings for your node; otherwise, after reboot, the node won’t configure networking. During installation, the bootloader is written to disk and does not contain the ip option for kernel autoconfiguration.

Here’s an example of a config patch containing the necessary values:

# node1.yaml
machine:
  install:
    disk: /dev/sda
  network:
    hostname: node1
    nameservers:
    - 1.1.1.1
    - 8.8.8.8
    interfaces:
    - interface: eno2np0
      addresses:
      - 10.0.0.131/24
      routes:
      - network: 0.0.0.0/0
        gateway: 10.0.0.1

You can use it to generate a full machine-config:

talosctl gen secrets
talosctl gen config --with-secrets=secrets.yaml --config-patch-control-plane=@node1.yaml <cluster-name> <cluster-endpoint>

Review the resulting config and apply it to the node:

talosctl apply-config -f controlplane.yaml -e 10.0.0.131 -n 10.0.0.131 -i

Once you apply controlplane.yaml, the node will install Talos on the /dev/sda disk, overwriting the existing OS, and then reboot.

All you need now is to run the bootstrap command to initialize the etcd cluster:

talosctl --talosconfig=talosconfig bootstrap -e 10.0.0.131 -n 10.0.0.131

You can view the node’s status at any time using the dashboard command:

talosctl --talosconfig=talosconfig dashboard -e 10.0.0.131 -n 10.0.0.131

As soon as all services reach the Ready state, retrieve the kubeconfig and you’ll be able to use your newly installed Kubernetes:

talosctl --talosconfig=talosconfig kubeconfig kubeconfig
export KUBECONFIG=${PWD}/kubeconfig

Using Talm for configuration management

When you have a lot of configs, you’ll want a convenient way to manage them. This is especially true with bare-metal nodes, where each node may have different disks, interfaces, and specific network settings. As a result, you might need to maintain a separate patch for each node.

To solve this, we developed Talm — a configuration manager for Talos Linux that works similarly to Helm.

The concept is straightforward: you have a common config template with lookup functions, and when you generate a configuration for a specific node, Talm dynamically queries the Talos API and substitutes values into the final config.

Talm includes almost all of the features of talosctl and adds a few extras. It can generate configurations from Helm-like templates, and it remembers the node and endpoint parameters for each node in the resulting file, so you don’t have to specify these parameters every time you work with a node.

Let me show how to perform the same steps to install Talos Linux using Talm:

First, initialize a configuration for a new cluster:

mkdir talos
cd talos
talm init

Adjust values for your cluster in values.yaml:

endpoint: "https://10.0.0.131:6443"
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
advertisedSubnets:
- 10.0.0.0/24

Generate a config for your node:

talm template -t templates/controlplane.yaml -e 10.0.0.131 -n 10.0.0.131 > nodes/node1.yaml

The resulting output will look something like:

# talm: nodes=["10.0.0.131"], endpoints=["10.0.0.131"], templates=["templates/controlplane.yaml"]
# THIS FILE IS AUTOGENERATED. PREFER TEMPLATE EDITS OVER MANUAL ONES.
machine:
  type: controlplane
  kubelet:
    nodeIP:
      validSubnets:
        - 10.0.0.0/24
  network:
    hostname: node1
    # -- Discovered interfaces:
    # eno2np0:
    #   hardwareAddr:a0:36:bc:cb:eb:98
    #   busPath: 0000:05:00.0
    #   driver: igc
    #   vendor: Intel Corporation
    #   product: Ethernet Controller I225-LM
    interfaces:
      - interface: eno2np0
        addresses:
          - 10.0.0.131/24
        routes:
          - network: 0.0.0.0/0
            gateway: 10.0.0.1
    nameservers:
      - 1.1.1.1
      - 8.8.8.8
  install:
    # -- Discovered disks:
    # /dev/sda:
    #    model: SAMSUNG MZQL21T9HCJR-00A07
    #    serial: S64GNG0X444695
    #    wwid: eui.36344730584446950025384700000001
    #    size: 1.9 TB
    disk: /dev/sda
cluster:
  controlPlane:
    endpoint: https://10.0.0.131:6443
  clusterName: talos
  network:
    serviceSubnets:
      - 10.96.0.0/16
  etcd:
    advertisedSubnets:
      - 10.0.0.0/24

All that remains is to apply it to your node:

talm apply -f nodes/node1.yaml -i 


Talm automatically detects the node address and endpoint from the “modeline” (a special comment at the top of the file) and applies the config.

You can also run other commands in the same way without specifying node address and endpoint options. Here are a few examples:

View the node status using the built-in dashboard command:

talm dashboard -f nodes/node1.yaml

Bootstrap etcd cluster on node1:

talm bootstrap -f nodes/node1.yaml

Save the kubeconfig to your current directory:

talm kubeconfig kubeconfig -f nodes/node1.yaml

Unlike configs generated with the official talosctl utility, the configs generated by Talm do not contain secrets, allowing them to be stored in Git without additional encryption. The secrets are kept at the root of your project and only in these files: secrets.yaml, talosconfig, and kubeconfig.
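
For orientation, here is roughly what the resulting project directory looks like, based on the files mentioned in this walkthrough (the exact layout may differ depending on your Talm version and how you organize templates):

talos/
├── values.yaml            # cluster-wide settings
├── secrets.yaml           # generated secrets (keep out of Git)
├── talosconfig            # Talos client credentials (keep out of Git)
├── kubeconfig             # retrieved after bootstrap (keep out of Git)
├── templates/
│   └── controlplane.yaml  # Helm-like template
└── nodes/
    └── node1.yaml         # rendered config with the “modeline” on top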

Summary

That’s our complete scheme for installing Talos Linux in nearly any situation. Here’s a quick recap:

  1. Use kexec to run Talos Linux on any existing system.
  2. Make sure the new kernel has the correct network settings by collecting them from the current system and passing them via the ip parameter in the cmdline. This lets you connect to the newly booted system via the API.
  3. When the kernel is booted via kexec, Talos Linux runs entirely in RAM. To install Talos on disk, apply your configuration using either talosctl or Talm.
  4. When applying the config, don’t forget to specify network settings for your node, because the on-disk bootloader configuration doesn’t include them automatically.
  5. Enjoy your newly installed and fully operational Talos Linux.

Using OpenTelemetry and the OTel Collector for Logs, Metrics, and Traces

OpenTelemetry (fondly known as OTel) is an open-source project that provides a unified set of APIs, libraries, agents, and instrumentation to capture and export logs, metrics, and traces from applications. The project’s goal is to standardize observability across various services and applications, enabling better monitoring and troubleshooting.

Read More at Causely

Bridging Design and Runtime Gaps: AsyncAPI in Event-Driven Architecture

The AsyncAPI specification emerged in response to the growing need for a standardized and comprehensive framework that addresses the challenges of designing and documenting asynchronous APIs. It is a collaborative effort of leading tech companies, open source communities, and individual contributors who actively participated in the creation and evolution of the AsyncAPI specification.

Various approaches exist for implementing asynchronous interactions and APIs, each tailored to specific use cases and requirements. Despite this diversity, these approaches fundamentally share a common baseline of key concepts. Whether it’s messaging queues, event-driven architectures, or other asynchronous paradigms, the overarching principles remain consistent. 

Leveraging this shared foundation, AsyncAPI taps into a spectrum of techniques, providing developers with a unified understanding of essential concepts. This strategic approach not only fosters interoperability but also enhances flexibility across various asynchronous implementations, delivering significant benefits to developers.

From planning to execution: Design and runtime phases of EDA

Design time and runtime refer to two distinct phases in the lifecycle of an event-driven system, each serving its own purpose:

Design time: This phase occurs during the design and development of the event-driven system, where architects and developers plan and structure the system, engaging in activities such as:

  • Designing event flows
  • Schema definition
  • Topic or channel design
  • Error handling and retry policies
  • Security considerations
  • Versioning strategies
  • Metadata management
  • Testing and validation
  • Documentation
  • Collaboration and communication
  • Performance considerations
  • Monitoring and observability

The design phase yields assets, including a well-defined and configured messaging infrastructure. This encompasses components such as brokers, queues, topics/channels, schemas, and security settings, all tailored to meet specific requirements. The nature of these assets may vary based on the choice of the messaging system.

Runtime: This phase occurs when the system is in operation, actively processing events based on the design-time configurations and settings and responding to triggers in real time. Typical runtime activities include:

  • Dynamic event routing
  • Concurrency management
  • Scalability adjustments
  • Load balancing
  • Distributed tracing
  • Alerting and notification
  • Adaptive scaling
  • Monitoring and troubleshooting
  • Integration with external systems

The output of this phase is the ongoing operation of the messaging platform, with messages being processed, routed, and delivered to subscribers based on the configured settings.

Role of AsyncAPI

AsyncAPI plays a pivotal role in the asynchronous API design and documentation. Its significance lies in standardization, providing a common and consistent framework for describing asynchronous APIs. AsyncAPI details crucial aspects such as message formats, channels, and protocols, enabling developers and stakeholders to understand and integrate with asynchronous systems effectively. 
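
For illustration, here is a minimal sketch of what such a document can look like for a hypothetical order service; the channel name, payload fields, and broker details are invented for the example and are not part of any real API:

asyncapi: 2.6.0
info:
  title: Order Events API          # hypothetical service
  version: 1.0.0
servers:
  production:
    url: broker.example.com:5672   # assumed AMQP broker
    protocol: amqp
channels:
  order/created:
    subscribe:
      message:
        name: OrderCreated
        contentType: application/json
        payload:
          type: object
          properties:
            orderId:
              type: string
            total:
              type: number

Even this small document captures the message format, the channel, and the protocol in a single machine-readable description.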

It should also be noted that the AsyncAPI specification serves as more than documentation; it becomes a communication contract, ensuring clarity and consistency in the exchange of messages between different components or services. Furthermore, AsyncAPI facilitates code generation, expediting the development process by offering a starting point for implementing components that adhere to the specified communication patterns.

In essence, AsyncAPI helps bridge the gap between design-time decisions and the practical implementation and operation of systems that rely on asynchronous communication.

Bridging the gap

Let’s explore a scenario involving the development and consumption of an asynchronous API, coupled with a set of essential requirements:

  • Designing an asynchronous API in an event-driven architecture (EDA):
    • Define the events, schema, and publish/subscribe permissions of an EDA service
    • Expose the service as an asynchronous API
  • Generating AsyncAPI specification:
    • Use the AsyncAPI standard to generate a specification of the asynchronous API
  • Utilizing GitHub for storage and version control:
    • Check in the AsyncAPI specification into GitHub, leveraging it as both a storage system and a version control system
  • Configuring GitHub workflow for document review:
    • Set up a GitHub action designed to review pull requests (PRs) related to changes in the AsyncAPI document
      • If changes are detected, initiate a validation process
      • Upon a successful review and PR approval, proceed to merge the changes
      • Synchronize the updated API design with the design time

This workflow ensures that design-time and runtime components remain in sync consistently. The feasibility of this process is grounded in the use of the AsyncAPI for the API documentation. Additionally, the AsyncAPI tooling ecosystem supports validation and code generation that makes it possible to keep the design time and runtime in sync.
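
As a sketch of what the validation step can look like, the following GitHub Actions workflow runs the AsyncAPI CLI against the specification whenever a pull request touches it; the path asyncapi/asyncapi.yaml is an assumed location for the document:

name: validate-asyncapi
on:
  pull_request:
    paths:
      - "asyncapi/asyncapi.yaml"   # assumed location of the AsyncAPI document
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install AsyncAPI CLI
        run: npm install -g @asyncapi/cli
      - name: Validate the AsyncAPI document
        run: asyncapi validate asyncapi/asyncapi.yaml

Once the pull request is approved and merged, a similar job can push the updated document to whatever system holds the design-time assets, which is the synchronization step described in the next section.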

Putting the scenario into action

Let us consider Solace Event Portal as the tool for building an asynchronous API and Solace PubSub+ Broker as the messaging system. 

An event portal is a cloud-based event management tool that helps in designing EDAs. In the design phase, the portal facilitates the creation and definition of messaging structures, channels, and event-driven contracts. Leveraging the capabilities of Solace Event Portal, we model the asynchronous API and share the crucial details, such as message formats, topics, and communication patterns, as an AsyncAPI document.

We can further enhance this process by providing REST APIs that allow for the dynamic updating of design-time assets, including events, schemas, and permissions. GitHub actions are employed to import AsyncAPI documents and trigger updates to the design-time assets. 

The synchronization between design-time and runtime components is made possible by adopting AsyncAPI as the standard for documenting asynchronous APIs. The AsyncAPI tooling ecosystem, encompassing validation and code generation, plays a pivotal role in ensuring the seamless integration of changes. This workflow guarantees that any modifications to the AsyncAPI document efficiently translate into synchronized adjustments in both design-time and runtime aspects. 

Conclusion

Keeping the design time and runtime in sync is essential for a seamless and effective development lifecycle. When the design specifications closely align with the implemented runtime components, it promotes consistency, reliability, and predictability in the functioning of the system. 

The adoption of the AsyncAPI standard is instrumental in achieving a seamless integration between the design-time and runtime components of asynchronous APIs in EDAs. The use of AsyncAPI as the standard for documenting asynchronous APIs, along with its robust tooling ecosystem, ensures a cohesive development lifecycle. 

The effectiveness of this approach extends beyond specific tools, offering a versatile and scalable solution for building and maintaining asynchronous APIs in diverse architectural environments.

Author
Post contributed by Giri Venkatesan, Solace 

Achieving Log Centralization and Analysis with Open Source SIEM and XDR: UTMStack

Log centralization and analysis are crucial for organizations in troubleshooting system errors, identifying cybersecurity threats, and adhering to various regulations such as The Health Insurance Portability and Accountability Act (HIPAA), Gramm-Leach-Bliley Act (GLBA), Payment Card Industry Data Security Standards (PCI), Cybersecurity Maturity Model Certification (CMMC), and more. While contemporary SIEM solutions have simplified log management, features like threat intelligence and advanced event correlation are often restricted to paid, closed-code systems. This article will walk you through deploying log collectors, a comprehensive log management solution, and correlation rules using UTMStack, an open source and free SIEM and XDR solution, for effective threat detection, system error identification, and automated remediation.

Technology and Architecture Overview

Deploying UTMStack for log centralization and analysis involves three main components: log collectors (also known as agents), a central server for log centralization, and correlation rules for detection and incident response.

Agents: These collect logs from systems and execute local and remote incident response commands. Agents can also function as proxies for collecting syslog and netflow logs from network devices.

Central Server: This server stores and correlates logs from various assets like other servers and firewalls to identify potential threats and orchestrates incident responses across the IT ecosystem.

Correlation rules and Incident Response: These detect possible threats to system security and availability by correlating logs from multiple systems with threat intelligence and predefined malicious sequences of behaviors and compromise indicators. Once a correlation rule evaluates a group of logs as potentially malicious, an alert triggers the incident response command.

Deploying the Open Source Security Stack

Log Centralization Server

The log centralization server can be deployed using an ISO image from the UTMStack website for simplicity. For advanced installation options, please visit the official GitHub repository: https://github.com/utmstack/UTMStack

Here are the instructions for installing without the ISO on Ubuntu Linux 22.04 LTS.

After installation, access the server via a browser using the server’s IP address or DNS name and the random secure password provided by the installer.

Deploying Log Collectors

Navigate to the “Integrations” section and select the appropriate agent for your operating system. Additional integrations can be configured as needed.

Defining Correlation Rules and Incident Response

Correlation rules form the core of a log management system, defining which logs or combinations thereof should trigger an alert or incident. UTMStack uses these rules as a basis for Incident Response playbooks.

Let’s take, for instance, a brute-force attack. This type of cybersecurity threat attempts to guess a user’s password by trying massive random combinations of characters until the correct sequence matches the user’s credentials. These types of attacks usually leave behind a trail of logs that indicate a user has failed to log into a system several times in a short period of time.

You can access the complete list of prebuilt correlation rules and the guide to creating new ones from the official UTMStack repository. For this guide, we’ll create a sample correlation rule to detect brute-force attacks.

UTMStack correlation rules are written in plain YAML and have three main components: a threat-documentation block that describes the rule and defines the attack tactic category, severity, and name of the rule; a logic and frequency block, where the conditions for triggering the alert are defined; and an alert-information block, which specifies what information is extracted from the logs and saved into the alert item.
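
As a rough illustration of these three blocks, a brute-force rule could be structured along the following lines. The field names below are simplified for readability and do not reproduce UTMStack’s actual rule schema, which is documented in the official repository:

# 1. Threat documentation: name, tactic category, and severity of the rule
name: Possible brute-force attack
category: Credential Access
severity: high
description: Repeated failed logins from a single source in a short time window
# 2. Logic and frequency: the conditions that trigger the alert
logic:
  match:
    event: authentication_failure
  group_by: source.ip
  frequency:
    count: 10      # ten or more failures...
    within: 60s    # ...within one minute
# 3. Alert information: fields extracted from the logs into the alert item
alert:
  fields:
    - source.ip
    - source.user
    - destination.host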

These YAML rules can be saved as text files and copied into the correlation rules folder via the Web User interface. Any rules uploaded there will be processed by the correlation engine automatically.

All logs the system receives are aggregated and correlated for indicators of compromise (IOCs) using several open threat intelligence feeds. This feature is enabled by default, and there is no need for custom correlation rules or configurations.

Finally, to deploy the incident response playbooks, navigate to the incident response automation section and drop in a command to block future login attempts from the offending host. This can be done by blocking its IP in the firewall or disabling the targeted user account until further investigation can be carried out.

UTMStack’s incident response commands use dynamic variables to handle the execution of commands against different targets. Here are some examples:

Command to block a user:
usermod -L ${source.user}

Command to block an IP:
iptables -A INPUT -s ${source.ip} -j DROP

Summary

Log centralization and analysis are essential for security, availability, and compliance. Open source tools can deliver advanced flexibility and rich feature sets to meet complex use cases and deliver an enterprise-ready experience. The UTMStack open source project is a powerful SIEM and XDR system that can deliver log management, threat detection and incident response by correlating and aggregating logs in real-time. Advanced features such as IOC detection, threat intelligence, and compliance are built-in features of the security stack.

Join Our Community and Contribute

We’re always looking for passionate individuals to contribute to our project. Whether you’re a developer, security expert, or just enthusiastic about cybersecurity, your input is valuable. Here’s how you can get involved:

GitHub Repository: Visit our GitHub repository to explore our code, submit issues, or contribute enhancements. Your code contributions can help us improve and expand UTMStack’s capabilities.

Discord Channel: Join our Discord community to discuss with fellow contributors, share ideas, and collaborate on projects. It’s a great place to learn from others and contribute your expertise.

Online Chat and Forums: For quick questions or discussions, use the online chat feature on our official website or the forums. It’s a direct line to our team and community for real-time interactions.

Your contributions, big or small, play a crucial part in the development and improvement of UTMStack. Together, we can build a stronger, more secure open-source SIEM & XDR solution. Join us today and help shape the future of cybersecurity!

Author


Rick Valdes
Founder, UTMStack

Linux Foundation Newsletter: December 2023

Welcome to the Linux Foundation’s December newsletter! In this edition, we cover the many gatherings that took place across the globe, notably at Open Source Summit Japan, AI.dev in San Jose, CA., and for several Linux Foundation project teams, at COP28 in Dubai. This month also saw the publication of our 2023 Annual Report, “Rising Tides of Open Source,” our most comprehensive publication of the year, as well as the launch of the LF Management and Best Practices initiative, new research reports and surveys, and the announcement of new projects.  We’re also excited to share our final Training & Certification deals of the year! 

Read more at linuxfoundation.org

Linux Foundation Newsletter: September 2023

Welcome to the September edition of the Linux Foundation newsletter! This month we have an announcement about a new vulnerability disclosure policy, new research published about sustainability in open source projects, news from WASMCon, and what you can expect at the upcoming Open Source Summit Europe. Don't miss our exclusive training and certification discounts, and stay updated with the latest news from our Linux Foundation projects.

Read the original blog at The Linux Foundation

Linux Foundation Newsletter: August 2023

Welcome to the August edition of the Linux Foundation Newsletter! We have an important announcement for the 3D graphics and virtual world industry: the launch of the Alliance for OpenUSD (AOUSD). Plus, don't miss our exclusive training and certification discounts, and stay updated with the latest news from our Linux Foundation projects.

Read the original blog at The Linux Foundation 

Linux Foundation Newsletter: July 2023

Welcome to the July edition of the Linux Foundation Newsletter! We have exciting announcements this month, including the launch of the Ultra Ethernet Consortium and two new reports on maintainers and standards from LF Research. Plus, don't miss the Embedded Open Source Summit highlights and exclusive training and certification discounts. As always, stay updated with the latest news from our Linux Foundation projects.

Read the original blog at The Linux Foundation

Linux Foundation Newsletter: June 2023

The Linux Foundation celebrates our diverse and inclusive LGBTQIA+ community. In this month's newsletter, we take a stand against patent trolls. Discover the RISE project hosted by LF Europe, and stay current with the latest research on open source microgrids and energy transformation readiness. Additionally, explore our new sustainability and digital trust initiatives for a more secure and trustworthy digital future. Be sure to check out our training and certification deals!

Read the original blog at The Linux Foundation 

Linux Foundation Newsletter: May 2023

In this edition, catch up on the new LF Connectivity project, Open Source Summit North America announcements, the 2023 Tech Talent Report, four new LF Research surveys, LFX Mentorship news, and more.

Read the original blog at the Linux Foundation 
