Greetings, lovers of AI! I'm thrilled to unveil my new explanatory video that demystifies the initial setup with OpenAI. This detailed walk-through is designed to make your AI exploration seamless and exciting. Whether you're a seasoned developer or an intrigued novice, this tutorial offers valuable insights and actionable instructions.
The video explores:
Cutting-edge Tech Integration:
A thrilling component of this guide is the incorporation of Professional Voice Cloning technology from ElevenLabs. This advanced tool helped me produce a more compelling and lively narration for the video. The use of voice cloning not only boosts the clarity and caliber of the audio, but also exemplifies the remarkable capabilities of AI in revolutionizing content generation.
Continue reading "Python Mastery From Basics to Brilliance - How-to Video on Configuring an OpenAI API Key"
In this online training course, you'll dive deep into the world of VMware Cloud solutions. You'll get to know all the important bits and bobs and learn how to smoothly connect and move stuff to VMware Cloud across different big players in the cloud game. By the end of it all, you'll be a pro at spotting and fixing any pesky cloud issues within your organization and even outside, making you the star player in a super competitive business world.
This learning path sets you up for the VMware Certified Professional – VMware Cloud 2024 exam. You'll get the lowdown on storage, networking, security, and all that jazz, boosting your skills in keeping things running smoothly and dealing with problems in a VMware Cloud setup.
This is a speech sample of my biography as featured on the about page: ElevenLabs_2024-02-15T10_56_44_Eric_Sloof_pvc_s50_sb75_t2.mp3
In both cloning methods, the clarity and quality of the audio are crucial. Consistent volume and minimal background noise lead to better results. It's also important to remember that cloned voices retain the accent of the original sample when speaking other languages.
ElevenLabs also offers a Python module, allowing the programmatic generation of speech.
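Beyond the official Python module, speech can also be generated by calling the ElevenLabs REST endpoint directly. The sketch below is a minimal, hedged example of that approach: the voice ID, model name, and output filename are placeholders, and the SDK's own interface may differ.

```python
import requests

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text, voice_id, api_key, model_id="eleven_multilingual_v2"):
    """Assemble the URL, headers, and JSON body for a text-to-speech POST.
    model_id is an illustrative default; check your account for available models."""
    return {
        "url": f"{API_BASE}/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "json": {"text": text, "model_id": model_id},
    }

def synthesize(text, voice_id, api_key, out_path="speech.mp3"):
    """Send the request and write the returned MP3 bytes to disk."""
    req = build_tts_request(text, voice_id, api_key)
    resp = requests.post(req["url"], headers=req["headers"], json=req["json"])
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return out_path
```

With a valid API key and one of your cloned voice IDs, `synthesize("Hello from NTPRO", my_voice_id, my_key)` would save the narration as an MP3 file.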
1. The Heart of Matter: HomePod mini and HomePod 2
The journey into the Matter-enabled world begins with the HomePod mini and HomePod 2. These devices are not just speakers; they are the central hubs that bring the smart home ecosystem to life. With Matter compatibility, these devices ensure seamless integration and interoperability with a wide range of smart home gadgets.
2. Sonoff's Innovation: The SONOFF MINI Extreme
A notable mention in the Matter-compatible device list is the SONOFF MINI Extreme Wi-Fi Smart Switch. This compact device allows users to control their home appliances remotely, adding convenience and efficiency to everyday tasks.
3. Eve: Simplifying Smart Home Automation
Eve's range of Matter-compatible devices further simplifies home automation. Although details on specific products are not outlined, Eve's reputation in the smart home market suggests an exciting range of Matter-enabled devices.
4. Raspberry Pi: The DIY Approach to Matter
The Raspberry Pi offers a more hands-on approach to Matter. From building a Matter home automation service to running the Matter Demo over Wi-Fi, Raspberry Pi serves as a versatile platform for developers and enthusiasts alike. It highlights projects like the Open Thread Border Router, using the Mattertool, and building a Matter Hub using Raspberry Pi.
5. Expanding Horizons with Matter on Ubuntu
Matter's flexibility extends to different operating systems, with Ubuntu emerging as a strong platform for developing Matter-compatible devices. This opens avenues for reducing development costs and encourages innovation in open-source smart home solutions.
6. Node-RED and Home Assistant: User-Friendly Interfaces
Node-RED Matter plugin and Home Assistant's beta for Matter introduce user-friendly interfaces for managing and automating smart home devices. These platforms make it easier for non-technical users to enjoy the benefits of Matter.
7. Silicon Labs and Homebridge: Expanding the Ecosystem
Silicon Labs' developer documentation and products like the SLEXP8023A and WFM200S Wi-Fi Expansion Kit demonstrate the growing hardware support for Matter. Homebridge's discussion on Raspberry Pi 5 support further indicates the expanding ecosystem.
8. IFTTT and iOS Shortcuts: Simplifying Automation
Finally, IFTTT and iOS Shortcuts represent the bridge between Matter devices and wider automation possibilities, allowing users to create custom scenarios and control their smart homes in unique ways.
Conclusion
Matter is not just a standard; it's a gateway to a more unified, secure, and user-friendly smart home experience. As we witness more devices and platforms embracing Matter, the dream of a truly interconnected smart home is becoming a reality. Stay tuned as this exciting technology continues to evolve, promising a smarter, more connected future for us all.
You can get a copy of the mind-map here.
Continue reading "Exploring the World of Matter: The Future of Smart Home Technology"
Jetson Nano: The Compact AI Powerhouse
The Jetson Nano is an affordable, yet powerful option. Products like the Kiwi reComputer J3010, featuring the NVIDIA Jetson Orin Nano 4GB module, are perfect for AI projects needing compactness without sacrificing performance. Also notable is the NVIDIA Jetson AGX Xavier Developer Kit, which, when paired with accessories like the Logitech C270 HD Webcam, becomes a robust tool for advanced AI applications.
The Jetson Nano Developer Kits, such as the 945-13450-0000-100 model, offer enthusiasts and professionals alike the opportunity to dive into AI development. NVIDIA also offers free courses through its Deep Learning Institute, like "Getting Started with AI on Jetson Nano," providing an excellent entry point for AI enthusiasts.
Raspberry Pi: Democratizing AI
The Raspberry Pi has been a revelation in making technology accessible. Particularly, its application in AI through machine learning is remarkable.
With comprehensive guides like "The Complete Guide to Setting Up Your Raspberry Pi 3 2023 (Mac)," enthusiasts can embark on their AI journey. TensorFlow Lite's integration with Raspberry Pi, especially for object detection, opens up numerous possibilities for AI applications.
The launch of the Raspberry Pi Camera 3, which can be seamlessly integrated with AI frameworks, further empowers developers to create sophisticated AI-powered applications.
Other Noteworthy AI Platforms
Google Coral: An Emerging Competitor
Google's Coral platform, though covered here in less detail, is another significant player in the AI hardware space, known for its speed and efficiency in running AI models.
Brainy Pi: A Cost-effective Alternative
For those looking for a more cost-effective alternative to the Jetson Nano, Brainy Pi presents a viable option. It's designed to run AI applications efficiently at lower costs, as detailed in resources like "Brainy Pi: A Jetson Nano Alternative to Run AI Applications at Lower Cost."
Conclusion
AI is rapidly evolving, with platforms like NVIDIA's Jetson, Raspberry Pi, Google Coral, and Brainy Pi leading the charge. Each offers unique capabilities and opportunities for both beginners and seasoned professionals in AI. As AI continues to integrate into various aspects of technology, these platforms will play a pivotal role in shaping the future of AI applications.
Download the mind-map, including all the links.
Continue reading "Navigating the World of Artificial Intelligence: A Guide to the Latest AI Technologies"
UTM is a game-changer for Apple Silicon users. It uses Apple's Hypervisor virtualization framework, allowing ARM64 operating systems to run at speeds almost identical to native performance. For those with Intel Macs, there's the ability to virtualize x86/x64 operating systems. And it doesn’t stop there - UTM also offers lower performance emulation for running x86/x64 on Apple Silicon and ARM64 on Intel. This versatility extends to supporting a range of other emulated processors, including ARM32, MIPS, PPC, and RISC-V, making your Mac a truly universal platform.
UTM isn’t just about the new and shiny; it also breathes life into classic operating systems. Whether it's an old PowerPC, SPARC, or x86_64 system, UTM lets you revisit the digital past. Curious users can explore a gallery showcasing various systems that UTM can emulate.
Apple Silicon Mac users can take virtualization a step further with UTM. It allows the running of multiple instances of macOS, which is particularly beneficial for developers and security-focused users. This feature, however, is exclusive to ARM-based Macs running macOS Monterey or higher.
UTM stands out from other virtualization software due to its design, which is exclusively for macOS and Apple platforms. It seamlessly blends with the Big Sur style, offering a user experience that feels genuinely Mac-like, complete with all the expected privacy and security features.
At its core, UTM harnesses the power of QEMU, a well-established, free, and open-source emulation software. QEMU's complexity is well-known, often deterring users with its array of command-line options. UTM eliminates this barrier, offering the robust features of QEMU without the complexity, making it accessible to a wider range of users.
Virtual machines on Apple devices have never been more accessible and versatile, thanks to UTM. Whether you're a developer, an enthusiast, or just someone who needs to run different operating systems on your Apple device, UTM is the tool you've been waiting for. Discover more about UTM and how it can transform your virtual machine experience on Apple devices at https://getutm.app/ and https://mac.getutm.app/.
Could you develop a Python script for VMware vCenter to retrieve the virtual machines and allow the user to control their power states? The server details are: host name 'vc.ntpro.local', username 'administrator@ntpro.local', password 'VMware1!' Please ignore certificate errors and use tls.
ChatGPT's response will be affirmative, recommending the pyvmomi library to interface with VMware vCenter. It will advise installing pyvmomi via pip if it isn't already present.
You can find the complete Python code through this link.
For Python project management, I prefer the PyCharm Community Edition. The pyvmomi module is crucial for our tasks and can be conveniently installed via the terminal with 'pip install pyvmomi'. PyCharm also offers a GUI for handling Python packages.
After generating a new Python file, insert the ChatGPT-provided script into PyCharm's editor. Execution may require some adjustments, like updating TLS versions. Should errors arise, ChatGPT is equipped to troubleshoot and provide solutions. Once the refined script is implemented, it successfully retrieves and manages the power states of the virtual machines from vCenter.
In a subsequent step, I prompted ChatGPT to integrate a basic GUI into the script using Tkinter, which ships with Python 3. The resulting interface displays a virtual machine list with functional power controls. Despite a minor glitch with the "wait task" during the demonstration, the core functionality was unaffected.
Remember, repeated queries won't yield identical results when soliciting code from ChatGPT. Precision in your inquiries enhances ChatGPT's ability to generate effective code. If your code encounters issues, ChatGPT can help debug and even elucidate the script's workings in depth. Enjoy coding!
Continue reading "Streamlining vSphere Operations with ChatGPT"
This document is tailored for experienced VMware professionals focused on optimizing and troubleshooting in production environments. It introduces metrics using the Triple See Method, a strategy that aligns metrics with operational management.
This is an advanced-level text, spanning over 300 pages. It's not a quick read, so Iwan 'e1' Rahabok suggested settling down with a coffee or your preferred beverage for a thorough and quiet reading session.
Continue reading "The vSphere Metrics Book - 3rd edition"
‘Back at Vegas, people asked us in jest whether the Pi could ever run ESX. We decided it was worth showing that it is possible. ESX on Pi is a promise that VMware understands that the Edge includes extremely low end devices as well. Scaling with the workload and environment is important!’
I recently acquired the cutting-edge Raspberry Pi 5, aiming to set it up with ESXi ARM. ESXi ARM was tailored for the Raspberry Pi 4, so initial attempts to run it on the Raspberry Pi 5 were met with a challenge due to the absence of UEFI support.
However, this scenario changed about a week ago when Mario Bălănică, associated with the Windows on Raspberry (WoR) project, unveiled UEFI firmware specially designed for the Raspberry Pi 5. This critical development has made it possible to install ESXi ARM on the device.
Before proceeding, it's important to note a few prerequisites. The Raspberry Pi 5's built-in network adapter is flagged as an unsupported UEFI peripheral, necessitating an external USB network adapter. For my setup, I chose the Maxonar USB C Ethernet Adapter, which offers reliable Gigabit LAN connectivity.
The installation procedure is quite direct. Begin by copying the UEFI files onto a microSD card formatted to FAT32. Next, transfer the ISO contents of the ESXi ARM installer onto a FAT32-compliant USB drive. Once you power up the Raspberry Pi, the installation should initiate on its own.
During my initial setup, the installation paused at 81%, hindered by the lack of a network adapter for the management network. This hiccup is overcome on the first reboot, where the USB network adapter is recognized, allowing the configuration of the management network thereafter.
The storage setup utilizes a USB interface, connecting through a robust "UGREEN M.2 Adapter NVMe SSD Hard Drive Enclosure" which houses a swift "Samsung SSD 256GB PM991 M.2 2242 42mm PCIe 3.0 x4 NVMe" drive. This USB drive plays a triple role: it's the install-boot media, the destination for the ESXi install, and the local storage for VMFS (Virtual Machine File System). You can leave it inserted after the installation completes. In future enhancements, I plan to integrate the Samsung SSD into my setup more seamlessly, using the PineBerry m.2 HATDrive, interfaced via NVMe.
In this video, I've captured the entire installation process from beginning to end, and it takes just 5 minutes to complete. The recording was done using a Video Capture Card, a Guermok USB 3.0 HDMI to USB C Audio Capture Card, and QuickTime on a Mac.
Once the installation is finalized, the next step is to start running virtual workloads. My first action was to set up a Photon virtual machine; VMware conveniently provides an ARM-compatible OVA file. The process is hassle-free, since I'm using Virtual Center to manage the ARM-based ESXi host—just a few clicks and the virtual machine springs to life.
The subsequent hurdle was setting up a Windows environment. Locating an ARM-compatible Windows ISO proved challenging, but after a diligent search, I found one. I uploaded the ISO to the local data store and configured a Windows VM. The installation of Windows was time-consuming, and the initial startup was even more prolonged. Eventually, a blue screen error interrupted the process, which requires further investigation.
Initially, I operated on a Raspberry Pi 5 with 4 GB of RAM, but soon transitioned to a more robust 8 GB model to accommodate additional VM workloads. Monitoring of the vCenter server is conducted through Aria Operations, which allows me to keep an eye on the ESXi ARM host and virtual machines via comprehensive dashboards. I've also set up Aria Operations for log management, simplifying the troubleshooting process.
In summary, the article showcases the advancements and challenges in setting up and running ESXi ARM on the Raspberry Pi 5. It highlights the technical requirements, the setup process, and the potential for hosting various virtual workloads, along with the importance of continuous monitoring and system optimization. Special thanks to Jimmy van der Mast for providing technical expertise.
Continue reading "Setting Up ESXi ARM on the Raspberry Pi 5"
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl
import atexit
import time

# vCenter Server details
vc_server = 'vc.ntpro.local'
username = 'administrator@ntpro.local'
password = 'VMware1!'

# Disable SSL certificate verification (for demo purposes only, not recommended for production)
context = ssl._create_unverified_context()

# Function to connect to vCenter
def connect_to_vcenter(server, user, password):
    si = SmartConnect(host=server, user=user, pwd=password, sslContext=context)
    atexit.register(Disconnect, si)
    return si

# Function to get all VMs
def get_all_vms(si):
    content = si.RetrieveContent()
    container = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    return container.view

# Power operations for the VM
def power_on_vm(vm):
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        task = vm.PowerOnVM_Task()
        wait_for_task(task)
        print(f"VM '{vm.name}' is powered on.")
    else:
        print(f"VM '{vm.name}' is already powered on.")

def power_off_vm(vm):
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
        task = vm.PowerOffVM_Task()
        wait_for_task(task)
        print(f"VM '{vm.name}' is powered off.")
    else:
        print(f"VM '{vm.name}' is already powered off.")

def suspend_vm(vm):
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        task = vm.SuspendVM_Task()
        wait_for_task(task)
        print(f"VM '{vm.name}' is suspended.")
    else:
        print(f"VM '{vm.name}' cannot be suspended because it is not powered on.")

def resume_vm(vm):
    if vm.runtime.powerState == vim.VirtualMachinePowerState.suspended:
        task = vm.PowerOnVM_Task()
        wait_for_task(task)
        print(f"VM '{vm.name}' is resumed.")
    else:
        print(f"VM '{vm.name}' is not suspended.")

def create_snapshot(vm):
    task = vm.CreateSnapshot_Task(name='Snapshot', description='Created by script', memory=False, quiesce=False)
    wait_for_task(task)
    print(f"Snapshot for VM '{vm.name}' created.")

# Wait for vCenter task to complete
def wait_for_task(task):
    while True:
        if task.info.state == vim.TaskInfo.State.success:
            return
        if task.info.state == vim.TaskInfo.State.error:
            print(f"Task failed: {task.info.error}")
            raise Exception("Task failed")
        time.sleep(1)  # poll instead of busy-waiting while the task is still running

# Main script logic
if __name__ == "__main__":
    si = connect_to_vcenter(vc_server, username, password)
    vms = get_all_vms(si)
    vms_dict = {vm.name: vm for vm in vms}
    print("List of VMs:")
    for vm_name in vms_dict.keys():
        print(vm_name)
    selected_vm_name = input("Enter the name of the VM you wish to manage: ")
    vm = vms_dict.get(selected_vm_name)
    if vm:
        print(f"Selected VM: {selected_vm_name}")
        action = input("Choose an action: (on) Power On, (off) Power Off, (suspend) Suspend, (resume) Resume, (snapshot) Create Snapshot: ").lower().strip()
        if action == 'on':
            power_on_vm(vm)
        elif action == 'off':
            power_off_vm(vm)
        elif action == 'suspend':
            suspend_vm(vm)
        elif action == 'resume':
            resume_vm(vm)
        elif action == 'snapshot':
            create_snapshot(vm)
        else:
            print("Invalid action selected.")
    else:
        print(f"VM '{selected_vm_name}' not found.")
The script is designed with safety and ease of use in mind, leveraging Python’s clear syntax and PyVmomi’s powerful bindings to the vSphere API. It’s a tool that can be expanded, customized, and integrated into larger workflows or dashboards. I'm hosting the source code, along with some screenshots on my GitHub page.
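When expanding the script with more actions, the if/elif chain in the main block can be collapsed into a dictionary dispatch, which keeps the menu and the handlers in one place. A minimal sketch, with stub handlers standing in for the pyvmomi-backed functions:

```python
def dispatch_action(action, vm, handlers):
    """Look up the handler registered for an action and run it against the VM."""
    handler = handlers.get(action)
    if handler is None:
        return f"Invalid action: {action}"
    return handler(vm)

# Stub handlers for illustration; in the real script these would be
# power_on_vm, power_off_vm, create_snapshot, and so on.
handlers = {
    'on': lambda vm: f"powering on {vm}",
    'off': lambda vm: f"powering off {vm}",
    'snapshot': lambda vm: f"snapshotting {vm}",
}

print(dispatch_action('on', 'web01', handlers))      # powering on web01
print(dispatch_action('reboot', 'web01', handlers))  # Invalid action: reboot
```

Adding a new operation then only requires registering one more entry in the dictionary, without touching the menu logic.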
Continue reading "Mastering vCenter Operations with Python: A Script to Manage Your VMs"
import requests
import json

# Replace these with your vROps details
vrops_host = 'https://ariaops.ntpro.local'
username = 'admin'
password = 'VMware1!'

# Endpoints
auth_url = f'{vrops_host}/suite-api/api/auth/token/acquire'
vm_search_url = f'{vrops_host}/suite-api/api/resources'
metrics_url_template = f'{vrops_host}/suite-api/api/resources/{{}}/stats'

# Authenticate and Get Token
headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
payload = {'username': username, 'password': password}
response = requests.post(auth_url, headers=headers, data=json.dumps(payload), verify=False)
token = response.json().get('token')
if not token:
    raise Exception("Authentication failed")

# Update headers with the token
headers['Authorization'] = f'vRealizeOpsToken {token}'

# Function to get the ID of a VM by its name
def get_vm_id(vm_name):
    vm_url = f'{vrops_host}/suite-api/api/resources'
    response = requests.get(vm_url, headers=headers, verify=False)
    if response.status_code == 200:
        resources = response.json()['resourceList']
        for resource in resources:
            if resource['resourceKey']['name'] == vm_name:
                return resource['identifier']
    return None

# Function to get metrics for a VM by its ID
def get_vm_metrics(vm_id):
    metrics_url = f'{vrops_host}/suite-api/api/resources/{vm_id}/stats'
    response = requests.get(metrics_url, headers=headers, verify=False)
    if response.status_code == 200:
        return response.json()
    return None

# Main script execution
vm_id = get_vm_id('raspberry')
if vm_id:
    vm_metrics = get_vm_metrics(vm_id)
    if vm_metrics:
        print("Metrics for VM 'raspberry':")
        print(json.dumps(vm_metrics, indent=4))
    else:
        print("Failed to retrieve metrics for VM 'raspberry'")
else:
    print("VM 'raspberry' not found")
Through this script, you can access the Aria Operations REST API, which authenticates and safeguards your data retrieval. Once authenticated, you can harvest a rich dataset that can be woven into custom solutions or operational dashboards. By leveraging this script, raw data can be transformed into actionable insights, enhancing virtual machine monitoring and management.
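The stats payload returned by the `/stats` endpoint is deeply nested. As a sketch of turning it into something dashboard-friendly, the helper below flattens it into (resource, metric, timestamp, value) rows; the field names (`values`, `stat-list`, `statKey`, `timestamps`, `data`) follow the documented response shape, and the sample payload is illustrative:

```python
def flatten_stats(stats_json):
    """Turn the nested Aria Operations stats payload into flat rows."""
    rows = []
    for entry in stats_json.get("values", []):
        resource_id = entry.get("resourceId")
        for stat in entry.get("stat-list", {}).get("stat", []):
            key = stat.get("statKey", {}).get("key")
            # timestamps and data are parallel lists; pair them up
            for ts, value in zip(stat.get("timestamps", []), stat.get("data", [])):
                rows.append((resource_id, key, ts, value))
    return rows

# Illustrative payload with the same shape as the /stats response
sample = {
    "values": [{
        "resourceId": "abc-123",
        "stat-list": {"stat": [{
            "statKey": {"key": "cpu|usage_average"},
            "timestamps": [1700000000000, 1700000300000],
            "data": [12.5, 14.0],
        }]},
    }]
}
print(flatten_stats(sample))
```

Each row can then be written straight to CSV or fed into a plotting library.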
The source code, complete with an example of the output and some screenshots, is hosted on my GitHub page.
Continue reading "Harvesting Data: Python's Gateway to Aria Operations Metrics"
The VMware Aria Operations Management Pack Builder is a self-contained tool designed to facilitate the development of bespoke management packs for VMware Aria Operations. It offers a user-friendly, code-free approach to importing data from external APIs. This tool allows the creation of new resources or the enhancement of existing VMware and third-party resources by adding new data, establishing relationships, and integrating events.
TensorFlow, renowned for its versatility in machine learning, is particularly effective for object detection projects on the Raspberry Pi. Its lightweight variant, TensorFlow Lite, suits the modest hardware of the Raspberry Pi, making it ideal for real-time object detection tasks. By pairing a camera module with TensorFlow, users can develop efficient, on-device models capable of identifying and categorizing objects in the camera's field of view.
The Raspberry Pi 5 uses a Python script tailored for object detection, processing images from its camera into structured JSON data. Additionally, it operates a web server that presents a REST API to Aria Operations, enabling the collection and statistical analysis of object detection data processed by TensorFlow.
{"person": 1, "bicycle": 0, "car": 17, "motorcycle": 0, "airplane": 0, "bus": 0, "train": 0, "truck": 1, "boat": 0, "traffic": 0, "fire": 0, "stop": 0, "parking": 0, "bench": 0, "bird": 0, "cat": 0, "dog": 0, "horse": 0, "sheep": 0, "cow": 0, "elephant": 0, "bear": 0, "zebra": 0, "giraffe": 0, "backpack": 0, "umbrella": 0, "handbag": 0, "tie": 0, "suitcase": 0, "frisbee": 0, "skis": 0, "snowboard": 0, "sports": 0, "kite": 0, "baseball": 0, "skateboard": 0, "surfboard": 0, "tennis": 0, "bottle": 0, "wine": 0, "cup": 0, "fork": 0, "knife": 0, "spoon": 0, "bowl": 0, "banana": 0, "apple": 0, "sandwich": 0, "orange": 0, "broccoli": 0, "carrot": 0, "hot": 0, "pizza": 0, "donut": 0, "cake": 0, "chair": 0, "couch": 0, "potted": 0, "bed": 0, "dining": 0, "toilet": 0, "tv": 0, "laptop": 0, "mouse": 0, "remote": 0, "keyboard": 0, "cell": 0, "microwave": 0, "oven": 0, "toaster": 0, "sink": 0, "refrigerator": 0, "book": 0, "clock": 0, "vase": 0, "scissors": 0, "teddy": 0, "hair": 0, "toothbrush": 0}
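Most of the counters in this payload are zero at any given moment, so a consumer of the REST API will usually want just the detected classes. A small sketch (the helper name and threshold are illustrative):

```python
import json

def nonzero_counts(payload):
    """Keep only the object classes that were actually detected (count > 0)."""
    counts = json.loads(payload) if isinstance(payload, str) else payload
    return {label: n for label, n in counts.items() if n > 0}

# Abbreviated version of the JSON served by the Raspberry Pi's web server
sample = '{"person": 1, "bicycle": 0, "car": 17, "truck": 1, "dog": 0}'
print(nonzero_counts(sample))  # {'person': 1, 'car': 17, 'truck': 1}
```

Aria Operations can then ingest only the interesting counters instead of the full list of classes.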
The package file and the corresponding Python code can be accessed and downloaded from my GitHub repository. You can find guidance on setting up TensorFlow on a Raspberry Pi in a previous article I have authored.
Continue reading "TensorFlow Management Pack For VMware Aria Operations"
The initial access to the AriaOps API is based on authentication with a username, password, and authentication source provided in a JSON body. Don't try to do this with basic authentication, because it will fail. Simply construct a POST call with a JSON body and retrieve the ops:token. This token is needed for any subsequent REST calls against AriaOps.
After receiving a 200 (OK) response, the ops:token has been successfully retrieved. It expires after 6 hours, and any subsequent requests must include this token.
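The same token acquisition can be scripted in Python instead of Postman. A minimal sketch, assuming a local authentication source (the host name, credentials, and `LOCAL` source value are placeholders to adjust for your environment):

```python
import requests

def acquire_ops_token(host, username, password, auth_source="LOCAL", verify=True):
    """POST the credentials and authentication source as JSON and return the ops:token."""
    url = f"{host}/suite-api/api/auth/token/acquire"
    body = {"username": username, "password": password, "authSource": auth_source}
    resp = requests.post(url, json=body,
                         headers={"Accept": "application/json"}, verify=verify)
    resp.raise_for_status()  # anything other than 200 (OK) means authentication failed
    return resp.json()["token"]

def auth_headers(token):
    """Subsequent requests carry the token in the Authorization header."""
    return {"Authorization": f"vRealizeOpsToken {token}",
            "Accept": "application/json"}
```

With a reachable appliance, `auth_headers(acquire_ops_token("https://ariaops.ntpro.local", "admin", "VMware1!"))` would yield the headers for all follow-up calls.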
To construct new API calls, you can use either the API-Programming-Operations Guide or the Swagger UI. In the Swagger UI, it is fairly easy to test API calls with the “Try it out” feature.
This also includes a valid ops:token in the test request. You can copy the curl request from Swagger and paste it into a new request in Postman, which automatically translates the curl command into a valid REST API call, fills in all the parameters, and creates a valid JSON body.
I've created some sample requests and exported the Postman collection to GitHub. Any comments or suggestions on X please.
Continue reading "VMware Aria Operations API – Postman Collection"
Creating a management pack for Aria Operations involves so many steps that I’ve decided to split this project into multiple blog articles. I’m using SimpleMind as a mind map tool to keep track of all the different components. All the source code will be hosted on GitHub. Please enhance or adjust this code to your own needs by forking or copying the code.
Before I started writing these articles, I built a working prototype to see whether it was even possible to get everything working. The prototype was also used during a recent delivery of the VMware vRealize Operations: Advanced Use Cases [v8.x] training course.
The beta version of the Python code, including a working PAK file and some screenshots, are already available on GitHub. In upcoming articles, I’ll enhance the code and create new versions. Currently, the roadmap contains an API with authentication, dashboards, events, alerts, etc. Feel free to bring up any suggestions on my X channel or in the comments on GitHub.
Continue reading "Integrating the Raspberry Pi Pico with Aria Operations - Introduction"
This video explains the voting mechanism used by vSAN during a site failure and a subsequent witness failure. Adaptive quorum control assigns more votes to the surviving site to handle a subsequent failure of the witness site. By assigning 3 votes to the components on the surviving site, there’s still a majority. Even when an additional host on the preferred site is lost, there are still enough votes for a majority, so the virtual machine keeps functioning.
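The vote arithmetic behind adaptive quorum control can be made concrete with a tiny sketch. The vote counts below are illustrative, mirroring the scenario in the video (3 re-assigned votes on the surviving site out of 5 total):

```python
def has_quorum(votes_present, votes_total):
    """Quorum requires a strict majority: more than half of all votes reachable."""
    return votes_present > votes_total / 2

# Illustrative layout after the site failure: the surviving site's components
# hold 3 of 5 total votes thanks to adaptive quorum control.
votes_total = 5

# The witness subsequently fails; only the surviving site's 3 votes remain.
assert has_quorum(3, votes_total)       # 3 > 2.5 -> the VM keeps running

# Without the vote re-assignment, the surviving site would hold just 2 votes
# and lose quorum when the witness goes down.
assert not has_quorum(2, votes_total)   # 2 > 2.5 is False
```

The extra vote is exactly what keeps a majority alive through the second failure.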
The Excalidraw source file is available on GitHub.
Continue reading "vSAN Adaptive Quorum Control in a stretched cluster"
In the first part of this video, you will get an explanation of how to install opencv-python. OpenCV is a Python library that allows you to perform image processing and computer vision tasks. The installation of opencv-python initially results in an error. Before installing opencv-python, you should first remove the EXTERNALLY-MANAGED file to step away from the use of virtual environments.
sudo rm /usr/lib/python3.11/EXTERNALLY-MANAGED
sudo pip3 install opencv-python
Now, we're ready to install TensorFlow by using the following command:
sudo pip3 install tensorflow
With the next command, we’re installing cvzone. This is a computer vision package that makes it easy to run image processing and AI functions.
sudo pip3 install cvzone
The installation is complete now. There’s one additional step you need to perform: when you copy the rpi5-bookworm-tflite GitHub repository to your Raspberry Pi 5, you have to change the paths in the tflitetestrpicamera.py file and point them to the correct location of the labels file and the TensorFlow Lite model.
model_path='/home/esloof/efficientdet_lite0.tflite'
label_path='/home/esloof/labels.txt'
The initial version of tflitetestrpicamera.py didn’t have any autofocus support for the Camera Module 3. After reading a great article, “How To Use Raspberry Pi Camera Module 3 with Python Code” by Les Pounder, I added a few extra lines of code that import libcamera for controlling the camera, and I also enabled autofocus.
Continue reading "TensorFlow Lite on a Raspberry Pi 5 with Camera Module 3"
Comments on X please.
When I wrote my earlier article about controlling the Elgato Key Light Air, I spent a fair amount of time on coding. My Python skills are still very minimal, and I’m following an online training course to get up to speed. But in the meantime, who’s going to write my code? Maybe ChatGPT can do the trick. The last time I used it was during the “VMware {code} Lab: Learning to Code Using ChatGPT API Interface” workshop hosted by Eric Nielsen at VMware Explore in Las Vegas. This workshop was a great introduction to the possibilities of ChatGPT. So, my question to ChatGPT was simple:
Are you familiar with the Raspberry Pi Pico?
And ChatGPT answered:
Yes, I'm familiar with the Raspberry Pi Pico. It's a microcontroller board developed by the Raspberry Pi Foundation. Unlike the traditional Raspberry Pi boards, which are single-board computers, the Raspberry Pi Pico is a microcontroller board based on the RP2040 chip, designed for embedded and low-level programming tasks.
The Pico is quite popular for its affordability, ease of use, and versatility, making it suitable for a wide range of projects, including embedded systems, IoT (Internet of Things) devices, and various DIY electronics projects. It's notable for its RP2040 chip, which features dual ARM Cortex-M0+ cores and provides a variety of I/O options, making it a flexible choice for programming and hardware interfacing.
The next question was: "Are you familiar with MicroPython?"
And ChatGPT answered:
Absolutely! MicroPython is a software implementation of the Python programming language optimized to run on microcontrollers and embedded systems. It's designed to be lean and efficient while providing a Python 3 programming environment for microcontroller boards.
MicroPython offers a subset of the Python language, adapted to the constraints and capabilities of microcontrollers. It allows developers to write Python code that can directly interact with hardware, making it popular for IoT, robotics, and embedded systems projects. Its simplicity and ease of use, along with its rich library support, make it a preferred choice for those familiar with Python programming who want to work with microcontrollers.
The Raspberry Pi Pico, for instance, can be programmed using MicroPython among other programming languages, offering an accessible and beginner-friendly approach for handling hardware tasks.
Great, let’s see if I can formulate a good question that can be used by ChatGPT to write the correct code for hosting a web server that toggles the led. So, the question is: Can you write a Micropython program for Raspberry Pi Pico W, which hosts a web server and offers a page that switches on and off the onboard led?
The answer was stunning:
To create a MicroPython program for the Raspberry Pi Pico, which hosts a web server and allows you to toggle the onboard LED via a webpage, you'll need to use the Pico's networking capabilities and MicroPython's ure and network modules. Here is an example code that achieves this functionality:
Here are the steps to use this code:
import network
import machine
import usocket as socket

led_pin = machine.Pin(25, machine.Pin.OUT)

html = """<!DOCTYPE html>
<html>
<head> <title>Raspberry Pi Pico LED Control</title> </head>
<body>
LED Control:
<form method="get" action="toggle">
<button type="submit">Toggle LED</button>
</form>
</body>
</html>
"""

# Set up WiFi connection
wlan = network.WLAN(network.STA_IF)
wlan.active(True)  # Activate the wireless interface
wlan.connect('YOUR_WIFI_SSID', 'YOUR_WIFI_PASSWORD')  # Replace with your SSID and password

# Initialize socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 80))
s.listen(5)

while True:
    conn, addr = s.accept()
    print("Got a connection from %s" % str(addr))
    request = conn.recv(1024)
    request = str(request)
    print("Content = %s" % request)
    led_on = request.find('/toggle')
    if led_on == 6:
        led_pin.value(not led_pin.value())  # Toggle LED
    response = html
    conn.send('HTTP/1.1 200 OK\n')
    conn.send('Content-Type: text/html\n')
    conn.send('Connection: close\n\n')
    conn.sendall(response)
    conn.close()
Note: This code sets up a basic web server with minimal error handling. In a production environment, you might want to add error handling and security measures to make the system more robust and secure.
After adding a few extra lines of code to print the IP address, I copied and pasted the code into Thonny and hit the run button…
from time import sleep  # needed for the delay in the wait loop

while wlan.isconnected() == False:
    print('Waiting for connection...')
    sleep(1)
print(wlan.ifconfig())
I must admit I was expecting an error, but I got the IP address and visited the webpage. After clicking the button, however, the LED didn't toggle.
Got a connection from ('192.168.178.122', 52756)
Content = b'GET /favicon.ico HTTP/1.1\r\nHost: 192.168.178.46\r\nConnection: keep-alive\r\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36\r\nAccept: image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8\r\nReferer: http://192.168.178.46/toggle?\r\nAccept-Encoding: gzip, deflate\r\nAccept-Language: en-US,en;q=0.9,nl;q=0.8\r\n\r\n'
I discovered that ChatGPT was using the LED pin address of the original Pico. The onboard LED of the Pico W model is addressed differently.
led_pin = machine.Pin("LED", machine.Pin.OUT)
Instead of:
led_pin = machine.Pin(25, machine.Pin.OUT)
So, I changed the address and gave it another try. Amazingly, it worked! I’ve used SimpleMind to structure and create this article.
All the resources are available in this mind map: Raspberry_Pi_Pico_LED_Control.smmx. This is the link to the original ChatGPT discussion. The following link leads to the working MicroPython code on GitHub.
Comments on X please.
At the end of the day, the announcements of the Recognition Awards are made, and this year I’m the winner. I’m officially “VCI of the Year EMEA 2023”. It’s a shame I wasn’t present to receive the award in person, because I had already attended VMware Explore in Las Vegas, but nevertheless, I feel proud of this achievement, so I recorded a short thank-you video.
The VMware Certified Instructor of the Year Badge is presented once a year to a VMware Instructor for their outstanding performance. A VMware Certified Instructor with this badge has maintained the highest Effective Effort Score in the region throughout the year, delivering a wide range of challenging courses with exceptional feedback from both customers and partners.
Tweet @esloof on X
This book consists of the following chapters: