<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on SMJED</title><link>https://smjed.net/posts/</link><description>Recent content in Posts on SMJED</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>&lt;a href="https://creativecommons.org/licenses/by-nc/4.0/" target="_blank" rel="noopener">CC BY-NC 4.0&lt;/a></copyright><lastBuildDate>Mon, 09 Jun 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://smjed.net/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Building SSHplex: A Modern TUI for SSH Connection Multiplexing</title><link>https://smjed.net/posts/2025/06/building-sshplex-a-modern-tui-for-ssh-connection-multiplexing/</link><pubDate>Mon, 09 Jun 2025 00:00:00 +0000</pubDate><guid>https://smjed.net/posts/2025/06/building-sshplex-a-modern-tui-for-ssh-connection-multiplexing/</guid><description>The Problem At Kindred, we relied on Remote Desktop Manager (RDM) to manage connections to our Windows and Linux hosts for broadcasting commands and checking system states. However, licensing costs were high and every new host required manual database entry. After finding no suitable alternatives, I decided to build my own solution.
Solution Design SSHplex needed three core capabilities: a modern terminal UI with host selection and bulk operations, flexible data source integration (NetBox and Ansible inventory), and terminal multiplexer support with session persistence for background tasks.</description><content type="html"><![CDATA[<p><img alt="SSHplex Session Manager" src="/images/sshplex-session-manager.png"></p>
<h2 id="the-problem">The Problem</h2>
<p>At Kindred, we relied on Remote Desktop Manager (RDM) to manage connections to our Windows and Linux hosts for broadcasting commands and checking system states. However, licensing costs were high and every new host required manual database entry. After finding no suitable alternatives, I decided to build my own solution.</p>
<h2 id="solution-design">Solution Design</h2>
<p>SSHplex needed three core capabilities: a modern terminal UI with host selection and bulk operations, flexible data source integration (NetBox and Ansible inventory), and terminal multiplexer support with session persistence for background tasks.</p>
<h2 id="development-approach">Development Approach</h2>
<p>Following KISS principles, I structured development in three phases. First, building the foundation with configuration, NetBox connectivity, basic TUI, and single SSH connections. Second, adding multi-select functionality, tmux session management, and error handling. Finally, implementing command broadcasting, session persistence, and performance optimization.</p>
<p>The modular architecture allowed rapid iteration while maintaining clean separation between UI, data sources, and multiplexer logic. I implemented CI/CD early with two pipelines: PR-triggered testing and linting, plus tag-based releases to GitHub and PyPI with automated changelog generation.</p>
<h2 id="implementation-strategy">Implementation Strategy</h2>
<p>I leveraged modern development practices with comprehensive testing using pytest, automated quality checks with flake8 and mypy, and semantic versioning. The pipeline caught integration issues early and reduced manual overhead significantly.</p>
<h2 id="leveraging-large-language-models">Leveraging Large Language Models</h2>
<p>My experience highlighted the importance of choosing the right AI tool. Initially using Claude Sonnet 3.5, I encountered over-engineering and inconsistent results. Switching to Claude Sonnet 4 in VS Code transformed the experience with precise execution and better context understanding.</p>
<p>Rather than random suggestions, I used structured prompts like: &ldquo;Create a NetBox API client class with connection pooling, automatic retry logic with exponential backoff, proper SSL certificate handling, and comprehensive error handling for device and VM queries.&rdquo;</p>
<p>The LLM excelled at boilerplate elimination, API integration patterns, test coverage generation, and maintaining consistent code patterns. I treated it as a sophisticated pair programmer where architecture decisions remained human-driven while leveraging AI efficiency for implementation details.</p>
<p>This collaboration increased development velocity by approximately 40% while maintaining code quality. The key was keeping human judgment central to design decisions while using AI as an advanced autocomplete tool for consistent implementation patterns.</p>
<p>The result is a robust tool that addresses real infrastructure management challenges through methodical development and strategic AI assistance.</p>
<hr>
<p><em>For more detailed technical information about the development process, AI collaboration strategies, and implementation specifics, see the <a href="/posts/2025/06/building-sshplex-more-details/">complete article</a>.</em></p>
]]></content></item><item><title>Building SSHplex: More details</title><link>https://smjed.net/posts/2025/06/building-sshplex-more-details/</link><pubDate>Mon, 09 Jun 2025 00:00:00 +0000</pubDate><guid>https://smjed.net/posts/2025/06/building-sshplex-more-details/</guid><description>The Problem At Kindred, we relied on Remote Desktop Manager (RDM) to manage connections to our Windows and Linux hosts. I primarily used it to connect to multiple VMs simultaneously and broadcast commands to check system states or run quick commands where Ansible ad-hoc was either too slow or when I needed immediate feedback.
However, we faced two major issues:
Licensing costs: The license was expiring and renewal was expensive Maintenance overhead: Every new host had to be manually added to the RDM SQL Server database After searching for alternatives, I found nothing that met our specific needs.</description><content type="html"><![CDATA[<p><img alt="SSHplex Session Manager" src="/images/sshplex-session-manager.png"></p>
<h2 id="the-problem">The Problem</h2>
<p>At Kindred, we relied on Remote Desktop Manager (RDM) to manage connections to our Windows and Linux hosts. I primarily used it to connect to multiple VMs simultaneously and broadcast commands to check system states or run quick commands where Ansible ad-hoc was either too slow or when I needed immediate feedback.</p>
<p>However, we faced two major issues:</p>
<ul>
<li><strong>Licensing costs</strong>: The license was expiring and renewal was expensive</li>
<li><strong>Maintenance overhead</strong>: Every new host had to be manually added to the RDM SQL Server database</li>
</ul>
<p>After searching for alternatives, I found nothing that met our specific needs. So I decided to build my own solution.</p>
<h2 id="solution-design">Solution Design</h2>
<p>I identified three core requirements for the new tool:</p>
<h3 id="-modern-terminal-user-interface">🖥️ Modern Terminal User Interface</h3>
<ul>
<li><strong>Host selection interface</strong> with search capabilities</li>
<li><strong>Bulk selection</strong> with hotkeys (like pressing &lsquo;A&rsquo; to select all)</li>
<li><strong>Command broadcasting</strong> across multiple SSH sessions</li>
</ul>
<h3 id="-flexible-source-of-truth-integration">🔗 Flexible Source of Truth Integration</h3>
<ul>
<li><strong>NetBox integration</strong> for VMs and devices</li>
<li><strong>Ansible inventory support</strong> with merging capabilities from multiple files</li>
<li><strong>Extensible architecture</strong> for future data sources</li>
</ul>
<h3 id="-terminal-multiplexer-support">🖼️ Terminal Multiplexer Support</h3>
<ul>
<li><strong>tmux integration</strong> as the initial multiplexer (widely available with excellent Python libraries)</li>
<li><strong>Extensible design</strong> for future multiplexer support</li>
<li><strong>Session persistence</strong>: sometimes I close my terminal but want background tasks to keep running so I can come back to them later</li>
</ul>
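<p>The persistence point above comes down to creating <em>detached</em> tmux sessions: they keep running after the terminal closes and can be resumed with <code>tmux attach</code>. SSHplex drives tmux through a Python library; purely as an illustration of the idea, here is a plain-subprocess sketch (session and host names are made up):</p>

```python
import subprocess

def build_session_commands(session_name, hosts):
    """Build the tmux commands for a detached session with one pane per
    host, each running ssh. Detached (-d) sessions survive the terminal
    closing; resume later with: tmux attach -t SESSION_NAME."""
    cmds = [["tmux", "new-session", "-d", "-s", session_name, "ssh", hosts[0]]]
    for host in hosts[1:]:
        cmds.append(["tmux", "split-window", "-t", session_name, "ssh", host])
    cmds.append(["tmux", "select-layout", "-t", session_name, "tiled"])
    return cmds

def create_session(session_name, hosts):
    # Separating command building from execution keeps the logic testable
    # without a running tmux server.
    for cmd in build_session_commands(session_name, hosts):
        subprocess.run(cmd, check=True)
```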
<h2 id="development-approach">Development Approach</h2>
<p>I followed the KISS (Keep It Simple, Stupid) principle and broke development into three phases:</p>
<h3 id="phase-1-foundation">Phase 1: Foundation</h3>
<ol>
<li><strong>Configuration system</strong> with proper validation</li>
<li><strong>NetBox connectivity</strong> and VM listing</li>
<li><strong>Basic TUI</strong> for host selection</li>
<li><strong>Single SSH connection</strong> functionality</li>
<li><strong>Logging infrastructure</strong> with proper namespacing</li>
</ol>
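<p>As a rough illustration of the fail-fast validation in step 1 (the real SSHplex config keys may differ; <code>url</code>, <code>token</code> and <code>verify_ssl</code> here are assumptions), validating the parsed dict up front turns a bad config file into one clear error instead of a failure deep inside a connection call:</p>

```python
from dataclasses import dataclass

@dataclass
class NetBoxConfig:
    url: str
    token: str
    verify_ssl: bool = True

def validate_config(raw: dict) -> NetBoxConfig:
    """Validate a config dict (e.g. the result of yaml.safe_load)."""
    missing = [key for key in ("url", "token") if not raw.get(key)]
    if missing:
        raise ValueError(f"missing required config keys: {missing}")
    if not raw["url"].startswith(("http://", "https://")):
        raise ValueError(f"url must be http(s), got {raw['url']!r}")
    return NetBoxConfig(url=raw["url"], token=raw["token"],
                        verify_ssl=bool(raw.get("verify_ssl", True)))
```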
<h3 id="phase-2-core-features">Phase 2: Core Features</h3>
<ol>
<li><strong>Multi-select capability</strong> in the TUI</li>
<li><strong>tmux session management</strong> with multiple panes</li>
<li><strong>Connection error handling</strong> and retry logic</li>
<li><strong>Search and filtering</strong> functionality</li>
</ol>
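<p>The retry logic in item 3 can be sketched as a small exponential-backoff helper (a generic illustration, not SSHplex&rsquo;s actual implementation; the injectable <code>sleep</code> keeps it testable):</p>

```python
import time

def retry_with_backoff(func, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call func(); on OSError (how network failures typically surface),
    wait base_delay * 2**attempt and try again, re-raising the last
    error once retries are exhausted."""
    for attempt in range(retries + 1):
        try:
            return func()
        except OSError:
            if attempt == retries:
                raise
            sleep(base_delay * (2 ** attempt))
```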
<h3 id="phase-3-polish--performance">Phase 3: Polish &amp; Performance</h3>
<ol>
<li><strong>Command broadcasting</strong> between panes</li>
<li><strong>Session persistence</strong> and management</li>
<li><strong>Advanced error recovery</strong> mechanisms</li>
<li><strong>Performance optimization</strong> with intelligent caching</li>
</ol>
<h3 id="cicd">CI/CD</h3>
<p>After having a first version, I quickly needed a pipeline to test and lint my code and to release it easily to GitHub and PyPI.
I created two pipelines:</p>
<ol>
<li>The first one was a CI workflow triggered on pull requests, checking linting and running basic tests (using my app&rsquo;s demo mode)</li>
<li>The second one, triggered on tags, builds and releases the app: it uploads to GitHub Releases with a description generated from the commits and also publishes to PyPI for pip install</li>
</ol>
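<p>The tag-triggered release pipeline might look roughly like this (workflow, action and version names are illustrative, not SSHplex&rsquo;s actual files):</p>

```yaml
# .github/workflows/release.yml (illustrative)
name: release
on:
  push:
    tags: ["v*"]
jobs:
  build-and-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: python -m pip install build
      - run: python -m build
      - name: GitHub release with notes generated from commits
        uses: softprops/action-gh-release@v2
        with:
          generate_release_notes: true
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```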
<h2 id="implementation-strategy">Implementation Strategy</h2>
<p>I leveraged GitHub Copilot extensively during development, using structured prompts to maintain consistency and follow best practices. The modular approach allowed me to:</p>
<ul>
<li><strong>Iterate quickly</strong> on each component</li>
<li><strong>Test incrementally</strong> as features were added</li>
<li><strong>Maintain clean separation</strong> between UI, data sources, and multiplexer logic</li>
<li><strong>Plan for extensibility</strong> from the start</li>
</ul>
<p>This methodical approach ensured that SSHplex evolved from a simple concept into a robust tool that addresses real infrastructure management challenges.</p>
<h2 id="cicd-pipeline">CI/CD Pipeline</h2>
<p>Building a reliable CI/CD pipeline was crucial for maintaining code quality and enabling rapid iteration:</p>
<h3 id="-automated-testing--quality">🔄 Automated Testing &amp; Quality</h3>
<ul>
<li><strong>GitHub Actions</strong> for continuous integration</li>
<li><strong>Pytest</strong> with comprehensive test coverage for core functionality</li>
<li><strong>Code quality checks</strong> using flake8 for style and mypy for static typing</li>
</ul>
<h3 id="-package-distribution">📦 Package Distribution</h3>
<ul>
<li><strong>PyPI publishing</strong> with automated versioning and release notes</li>
<li><strong>Multi-version testing</strong> across Python versions</li>
</ul>
<h3 id="-release-strategy">🚀 Release Strategy</h3>
<ul>
<li><strong>Semantic versioning</strong> with automated changelog generation</li>
</ul>
<p>The pipeline reduced manual overhead significantly and caught integration issues early, allowing me to focus on feature development rather than release management.</p>
<h2 id="leveraging-large-language-models">Leveraging Large Language Models</h2>
<p>My experience with LLMs during this project highlighted the importance of choosing the right tool for the task:</p>
<h3 id="-evolution-of-ai-assistance">🔄 Evolution of AI Assistance</h3>
<p>Initially, I started with <strong>Claude Sonnet 3.5</strong>, but encountered several limitations:</p>
<ul>
<li><strong>Over-engineering</strong>: Often generated unnecessarily complex solutions</li>
<li><strong>Context misunderstanding</strong>: Frequently missed the specific requirements or intent</li>
<li><strong>Inconsistent results</strong>: Code quality varied significantly between iterations</li>
</ul>
<p>When Anthropic announced <strong>Claude Sonnet 4</strong> in preview (available in VS Code), the experience transformed completely:</p>
<ul>
<li><strong>Precise execution</strong>: With clear instructions, it consistently delivered working Python code</li>
<li><strong>Agent mode debugging</strong>: Exceptional at identifying and fixing issues autonomously</li>
<li><strong>Intent understanding</strong>: Better grasp of project context and requirements</li>
</ul>
<h3 id="-strategic-implementation-approach">🎯 Strategic Implementation Approach</h3>
<p>Rather than relying on random code suggestions, I developed a structured workflow:</p>
<p><strong>Detailed Instruction Sets:</strong></p>
<pre tabindex="0"><code>&#34;Create a NetBox API client class with connection pooling, automatic retry logic
with exponential backoff, proper SSL certificate handling, and comprehensive
error handling for device and VM queries. Include logging and timeout management.&#34;
</code></pre><p><strong>Iterative Refinement:</strong></p>
<ul>
<li>Start with high-level architecture prompts</li>
<li>Break down complex features into smaller, specific tasks</li>
<li>Use the agent mode for debugging and optimization passes</li>
</ul>
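<p>A prompt like the one above tends to yield a client along these lines: a minimal sketch assuming NetBox&rsquo;s standard token authentication and virtualization endpoint, not SSHplex&rsquo;s actual code. Pooling comes from mounting an <code>HTTPAdapter</code> on a <code>requests.Session</code>, and backoff from urllib3&rsquo;s <code>Retry</code>:</p>

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

class NetBoxClient:
    """Minimal NetBox REST client: pooled connections, exponential
    backoff on transient HTTP errors, token authentication."""

    def __init__(self, url, token, verify_ssl=True):
        self.base = url.rstrip("/")
        self.session = requests.Session()
        self.session.verify = verify_ssl  # SSL certificate handling
        self.session.headers["Authorization"] = f"Token {token}"
        retry = Retry(total=5, backoff_factor=0.5,
                      status_forcelist=(429, 502, 503, 504))
        adapter = HTTPAdapter(pool_connections=10, pool_maxsize=10,
                              max_retries=retry)
        self.session.mount(self.base, adapter)

    def virtual_machines(self, **filters):
        """Query /api/virtualization/virtual-machines/, optionally filtered."""
        url = f"{self.base}/api/virtualization/virtual-machines/"
        resp = self.session.get(url, params=filters, timeout=10)
        resp.raise_for_status()
        return resp.json()["results"]
```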
<h3 id="-key-success-areas">💡 Key Success Areas</h3>
<p>The LLM excelled particularly in:</p>
<ul>
<li><strong>Boilerplate elimination</strong> - Configuration parsing and validation schemas</li>
<li><strong>API integration patterns</strong> - Consistent error handling across different data sources</li>
<li><strong>Test coverage generation</strong> - Comprehensive unit test scaffolding</li>
<li><strong>Code refactoring</strong> - Maintaining consistency across modules during iterations</li>
</ul>
<h3 id="-human-ai-collaboration-balance">⚖️ Human-AI Collaboration Balance</h3>
<p>The most effective approach was treating the LLM as a highly skilled pair programmer:</p>
<ul>
<li><strong>Architecture decisions</strong> remained human-driven</li>
<li><strong>Implementation details</strong> leveraged AI efficiency</li>
<li><strong>Domain expertise</strong> guided prompt engineering and validation</li>
<li><strong>Code review</strong> ensured real-world applicability</li>
</ul>
<p>This collaboration model increased development velocity by approximately 40% while maintaining code quality and architectural integrity.</p>
<h3 id="-code-generation--boilerplate">💡 Code Generation &amp; Boilerplate</h3>
<ul>
<li><strong>Configuration parsing</strong> - Copilot excelled at generating YAML validation schemas</li>
<li><strong>API integration code</strong> - Particularly helpful for NetBox API client implementation</li>
<li><strong>Error handling patterns</strong> - Consistent exception handling across modules</li>
<li><strong>Test case generation</strong> - Automated creation of unit test scaffolding</li>
</ul>
<h3 id="-structured-prompting-strategy">🎯 Structured Prompting Strategy</h3>
<p>Instead of random suggestions, I used specific prompts:</p>
<pre tabindex="0"><code>&#34;Generate a NetBox API client class with connection pooling,
retry logic, and proper error handling for device queries&#34;
</code></pre><pre tabindex="0"><code>&#34;Create a tmux session manager that handles pane creation,
layout management, and graceful cleanup on session termination&#34;
</code></pre><h3 id="-human-oversight--validation">⚖️ Human Oversight &amp; Validation</h3>
<ul>
<li><strong>Code review</strong> - All generated code underwent manual review</li>
<li><strong>Architecture decisions</strong> - LLM suggestions informed but didn&rsquo;t dictate design choices</li>
<li><strong>Domain expertise</strong> - Combined AI efficiency with infrastructure knowledge</li>
<li><strong>Testing validation</strong> - Ensured generated code met real-world requirements</li>
</ul>
<h3 id="-productivity-impact">📈 Productivity Impact</h3>
<p>Using LLMs strategically increased development velocity by approximately 40%, particularly in:</p>
<ul>
<li><strong>Reducing boilerplate writing time</strong></li>
<li><strong>Accelerating API integration development</strong></li>
<li><strong>Generating comprehensive test coverage</strong></li>
<li><strong>Maintaining consistent code patterns</strong></li>
</ul>
<p>The key was treating Copilot as a sophisticated autocomplete rather than an architect - keeping human judgment central to design decisions while leveraging AI for implementation efficiency.</p>
]]></content></item><item><title>AI Transformed My Journey as a System Engineer: Developing a Terraform Provider for Centreon</title><link>https://smjed.net/posts/2025/02/ai-transformed-my-journey-as-a-system-engineer-developing-a-terraform-provider-for-centreon/</link><pubDate>Tue, 25 Feb 2025 00:00:00 +0000</pubDate><guid>https://smjed.net/posts/2025/02/ai-transformed-my-journey-as-a-system-engineer-developing-a-terraform-provider-for-centreon/</guid><description>As a day-to-day Terraform user with a decent foundation in Python, I never imagined that developing a Terraform provider would significantly impact my system engineering skills. Yet, leveraging AI tools enabled me to build a provider for Centreon API V2 and step into the Go ecosystem—an essential leap for my work at Kindred.
Overview For years, there was a significant gap in available tools: the only existing Centreon Terraform provider was built around the legacy CLAPI, which had not been updated in over five years.</description><content type="html"><![CDATA[<p>As a day-to-day Terraform user with a decent foundation in Python, I never imagined that developing a Terraform provider would significantly impact my system engineering skills. Yet, leveraging AI tools enabled me to build a provider for Centreon API V2 and step into the Go ecosystem—an essential leap for my work at Kindred.</p>
<h2 id="overview">Overview</h2>
<p>For years, there was a significant gap in available tools: the only existing Centreon Terraform provider was built around the legacy CLAPI, which had not been updated in over five years. While there was also a V1 (distinct from CLAPI), it lacked the features needed for modern infrastructure management. My need for an up-to-date solution at Kindred pushed me to create a new provider based on the latest Centreon API V2, ensuring future-proof functionality and seamless integration with current workflows.</p>
<h2 id="ai-powered-development-workflow">AI-Powered Development Workflow</h2>
<p>The AI-driven process was central to my success. Here’s how I leveraged it:</p>
<ul>
<li><strong>Using OpenAPI Documentation as Context:</strong> I provided the Centreon API documentation to the AI, which generated integration code complete with logging, unit tests, and robust error handling.</li>
<li><strong>GitHub Copilot &amp; Insider Agent Mode:</strong> Enhanced by the latest insider agent mode and the Claude Sonnet 3.5 model, GitHub Copilot in VSCode helped me interactively ask questions about code segments. This allowed me to understand the generated code, manage follow-up queries effectively, and reframe prompts when needed.</li>
<li><strong>Iterative Testing:</strong> I continuously asked the AI for explanations and tested the code, refining it until it met the necessary standards. This “managing the AI as a worker” approach was a game changer, providing clarity and a much-needed boost in confidence.</li>
</ul>
<h2 id="tools-and-technologies">Tools and Technologies</h2>
<p>This project integrated a variety of tools to create a solid foundation for developing the provider:</p>
<ul>
<li><strong>GitHub &amp; GitHub Actions:</strong> Handled source control, CI/CD workflows, and automated testing.</li>
<li><strong>Terraform Provider SDK V2:</strong> Served as the framework for building provider-specific functionalities.</li>
<li><strong>Go:</strong> Became my primary language, with AI assistance bridging the gap from my Python background.</li>
<li><strong>Linters &amp; Makefiles:</strong> Ensured code quality and streamlined the build process.</li>
<li><strong>Unit Tests:</strong> Played a critical role in ensuring the reliability and maintainability of the provider.</li>
<li><strong>Tags and Releases:</strong> Automated versioning and release management to maintain a clear project history.</li>
<li><strong>Documentation &amp; Contributing Workflow:</strong> Established guidelines for external contributions and issue management.</li>
</ul>
<p>I used the <a href="https://github.com/hashicorp/terraform-provider-scaffolding-framework">Terraform Provider Scaffolding Framework</a> as my base repository for CI/CD, among other practices. However, I intentionally skipped the copyright tool offered by HashiCorp—since it would transfer code ownership to HashiCorp—to keep the project completely open source.</p>
<h2 id="challenges-and-lessons-learned">Challenges and Lessons Learned</h2>
<p>The journey was not without its challenges:</p>
<ul>
<li><strong>Prompt Engineering:</strong> Crafting the right prompts was key. At times, follow-up questions led to context drift, requiring me to reset the conversation and guide the AI back on track.</li>
<li><strong>Managing Context:</strong> A larger context improved code coherence but sometimes resulted in an overload of follow-up queries that needed to be pruned.</li>
<li><strong>Tool Limitations:</strong> While GitHub Copilot in VSCode was a powerful assistant, occasional misalignments or unexpected bugs required me to intervene and adjust the AI&rsquo;s output.</li>
</ul>
<p>Each challenge became a valuable lesson, improving the quality of the provider and deepening my technical expertise in Go and CI/CD integrations.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Developing a Terraform provider for Centreon API V2 using AI was a practical, hands-on experience that significantly expanded my technical skill set. By combining modern AI tools with best practices and robust testing methodologies, I was able to fill a long-standing gap in the Centreon ecosystem and produce a provider that meets modern standards. This project not only improved my proficiency in Go and open source development but also reinforced the value of AI as an essential tool for system engineers.</p>
<hr>
<p><em>Originally published on the HuGO blog.</em></p>
]]></content></item><item><title>Hello World</title><link>https://smjed.net/posts/2025/02/hello-world/</link><pubDate>Mon, 24 Feb 2025 00:00:00 +0000</pubDate><guid>https://smjed.net/posts/2025/02/hello-world/</guid><description>Welcome to my blog! I&amp;rsquo;m a French systems engineer with a long-standing passion for systems, security, and networking that dates back to my younger years. What started as curiosity has evolved into a fulfilling career and continuous learning journey.
About Me I&amp;rsquo;ve built my career around understanding and implementing robust system architectures, but I believe there&amp;rsquo;s always room to grow. Recently, I&amp;rsquo;ve been diving deeper into programming with a particular focus on Go and Python.</description><content type="html"><![CDATA[<p>Welcome to my blog! I&rsquo;m a French systems engineer with a long-standing passion for systems, security, and networking that dates back to my younger years. What started as curiosity has evolved into a fulfilling career and continuous learning journey.</p>
<h2 id="about-me">About Me</h2>
<p>I&rsquo;ve built my career around understanding and implementing robust system architectures, but I believe there&rsquo;s always room to grow. Recently, I&rsquo;ve been diving deeper into programming with a particular focus on Go and Python. Despite being what some might call a &ldquo;late learner&rdquo; in the programming world, I&rsquo;m determined to master these skills to complement my systems expertise.</p>
<h2 id="what-to-expect">What to Expect</h2>
<p>This blog will serve as a chronicle of my experiences and projects. I plan to share:</p>
<ul>
<li>Technical tutorials and walkthroughs</li>
<li>Insights from my professional journey</li>
<li>Personal projects combining systems engineering and programming</li>
<li>Lessons learned along the way</li>
</ul>
<p>I believe in the power of sharing knowledge, and I hope that documenting my journey might help others with similar interests or career paths.</p>
<p>So here we go! The beginning of what I hope will be an insightful collection of posts for both myself and anyone who stumbles upon this corner of the internet.</p>
<p>Stay tuned for more!</p>
]]></content></item></channel></rss>