#TELEGRAF
## Adding a Balcony Solar Kit To Our Capacity

Our solar install is not far off three years old. We've got two east-facing panel arrays providing 3.2kWp of PV capacity, supported by a 6kWh battery. However, only having an easterly exposure has always been a _little_ bit of a bugbear. Although the panels work throughout the day, they start to wane around lunchtime, wasting some of the potential that a sunny afternoon can bring.

The house itself doesn't have a usable roof with a westerly (or southerly) exposure. The _garage_, however, does. Although it obviously won't generate as much as the existing install, I decided to order a balcony solar kit for the garage so that we could mop up some of the afternoon sun.

This post describes the install process along with the initial results (despite the weather being a bit... British).

* * *

### Balcony Solar In The UK

These kits are sometimes referred to as "plug in" solar because, in Europe, they're widely available with a plug, allowing them to be plugged into a spare socket. _Balkonkraftwerk_ kits are, apparently, hugely popular with renters in the German market.

In the UK, however, it's **not** OK to simply plug into a socket and a dedicated circuit is required instead. It's not that plugging it in won't work, but that wiring regulations (and therefore home insurance) don't currently support doing so. However, that **is changing**: following a study, the government announced that it will legalise plug-in solar, with an amendment to BS 7671 expected to land this year. This is, overall, a great outcome as it will make solar more readily available to renters.

However, **my** system isn't connected via a plug: design and setup pre-dated the Government's announcement, so I've had it hardwired in to comply with the existing regulations1.
* * *

### The Kit

My kit consisted of:

* 2 Trina 440W panels
* A Hoymiles HMS-800W-2T 800W grid-tied micro-inverter
* Various fixings

To suit my planned layout, I also ordered a couple of extra bits:

* 2 pairs of DC solar extension leads
* A couple of pairs of hinged solar mounts

* * *

### Install Topology

Although I ordered the kit with the garage roof in mind, I was actually jumping the gun a bit. Our garage has an asbestos concrete roof, which we're planning on getting replaced this year. It's obviously not possible to mount panels on there without risk of disturbance, so I either needed to wait, or to install the panels in suitable temporary locations. I'm not _great_ at waiting, so I went for the second option.

We've got a small lean-to shed attached to the end of the garage, and a pergola-style walkway2 up the side, so I planned to place one panel on the shed and the other over the walkway. The microinverter would then sit inside the garage, protected from the elements.

Although the shed sits a little lower than the garage roof (causing shade when the sun's to the south), it seemed like a good location to catch the waning sun and deliver electrons when we need them most: during peak pricing4 at the end of the day.

* * *

### The Install

With the challenges of working at rooftop height removed, installing panels isn't particularly difficult. They **do** weigh a little more than you might expect, but are still within the bounds of a one-person job.

To sit the panel at an angle and create a suitable mountpoint, I screwed a bit of wood across the top of the pergola and then used hinged mounts to fix the panel to it. So that the panel wouldn't flap in the wind, I also used hinged mounts to secure the other end to the cross beams.

Before starting, I had thought that the shed mount would be the easiest of the two but, at install time, I ran into an issue.
So that I could make it tilt and face west, I'd ordered hinge mounts, assuming that they could be fitted along any edge of the panel. However, the panels only have fixing holes on the _long_ sides. Although it'd probably have been fine, I didn't fancy drilling new holes into a brand new panel, so I decided to use flat mounts instead.

The downside of this is that it increases the likelihood of the panel being in shade. It's not a great _long term_ placement for it but, I hoped, it would be OK for a while.

DC cables run from the panels into the garage and back to the micro-inverter (which supports two strings, allowing me to track the two panels separately). The inverter's AC output is wired back to a fused isolation switch. To protect future sparkies, the warning sticker in our main fusebox has also been updated to list the garage as a generation location.

* * *

### Initial Start-Up

We had a _little_ bit of a false start.

It was time for the grand switch-on, so I connected the panels and flipped the isolation switch. The inverter's LED started blinking, indicating that it was starting up. Because it can take up to an hour to sync with grid frequency, I walked away with the intention of checking back a little later.

However, before that, I got a somewhat alarming message from my partner. I went downstairs and, as soon as I stepped out of the back door, was greeted with a very definite smell of burning. I went into the garage, flipped the isolator switch and then realised that it didn't smell inside, indicating that the smell was _probably_ coming from around the panels or their connectors.

I checked the connections for each, then climbed up a ladder to check the panels themselves. The shed panel didn't have any smell around it; the walkway one had more of a _hint_. The smell _was_ definitely dissipating, though, so it was hard to ignore the coincidence of it coming and going around the time that things were turned on and off.
I did a few extra checks:

* Pointed my thermal camera at _everything_: no sign of anything being hot
* Put my multimeter across the end of the DC extension leads: both panels were giving a good 37V, suggesting no issue with the leads or the panel-end connectors

I was **sure** that everything looked fine and yet, the smell... I had other things that I needed to be doing, so I decided to leave things disconnected until the next day, when I'd be more able to monitor it.

Later that night, though, I was reading the local news and spotted this:

We're far enough away that the smoke wasn't visible, but close enough (with the help of the wind) to have caught the smell. The next day, I connected things back up and, sure enough... no burning smell.

* * *

### Monitoring

Hoymiles inverters support both remote and local monitoring. Once I'd figured out getting the inverter onto our wifi, it started reporting into the S-Miles Cloud. I haven't played around with it much, but it seems OK.

* * *

#### Connecting Telegraf

I _already_ monitor our existing solar install, so I wanted to get metrics into InfluxDB. Happily, it turned out that someone has already created a telegraf plugin to connect to the inverter and retrieve metrics.
I cloned the repo down, built the plugin and then moved the resulting binary to the directory that I tend to drop telegraf plugins into:

```sh
git clone https://github.com/liwde/telegraf-hoymiles-wifi.git
cd telegraf-hoymiles-wifi
go build -o hoymiles ./cmd/hoymiles
sudo mv hoymiles /usr/local/src/telegraf_plugins/
```

The plugin requires a separate config file to define inverters that it should connect to:

```sh
cat << EOM > /usr/local/src/telegraf_plugins/hoymiles.conf
[[inputs.hoymiles_wifi]]
hostname = "192.168.13.227"
EOM
```

With everything in place, I configured `telegraf` by adding the following to `/etc/telegraf/telegraf.conf`:

```toml
[[inputs.execd]]
command = ["/usr/local/src/telegraf_plugins/hoymiles", "-config", "/usr/local/src/telegraf_plugins/hoymiles.conf"]
signal = "none"
```

I reloaded the telegraf service and points like the following started being written into InfluxDB:

```
hoymiles_dtu,dtu_serial_number=xxxx dtu_energy_daily=52i,dtu_power=70.6 1774600109000000000
hoymiles_inverter,dtu_serial_number=xxxx,inverter_serial_number=xxxx inverter_power=70.6,inverter_temperature=13,inverter_voltage=246.4,inverter_frequency=50.03,inverter_current=0.28 1774600109000000000
hoymiles_pv,dtu_serial_number=xxxx,inverter_port_number=1,inverter_serial_number=xxxx pv_energy_daily=27i,pv_energy_total=84i,pv_voltage=31.5,pv_current=1.23,pv_power=38.8 1774600109000000000
hoymiles_pv,dtu_serial_number=xxxx,inverter_port_number=2,inverter_serial_number=xxxx pv_voltage=32.5,pv_current=1.09,pv_power=35.7,pv_energy_daily=25i,pv_energy_total=38i 1774600109000000000
```

* * *

##### Reporting Gaps

One thing that I hadn't really thought about before is that the inverter becomes unreachable overnight. It's powered by the panels themselves, so as the yield drops away, the inverter goes offline. Happily, though, the telegraf plugin copes with this just fine.
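The line-protocol points shown earlier each decompose into a measurement, a tag set, a field set and a timestamp. As a quick illustration (this is my own simplified sketch, not part of the plugin, and it ignores line protocol's escaping rules), the structure can be pulled apart in a few lines of Python:

```python
# Minimal, illustrative InfluxDB line-protocol parser.
# Assumes no escaped spaces/commas inside tag or field values.
def parse_point(line):
    # measurement+tags, fields and timestamp are space-separated
    head, fields_str, ts = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    tags = dict(pair.split("=", 1) for pair in tag_pairs)

    fields = {}
    for pair in fields_str.split(","):
        key, val = pair.split("=", 1)
        # Integer fields carry an 'i' suffix; the rest here are floats
        fields[key] = int(val[:-1]) if val.endswith("i") else float(val)

    return measurement, tags, fields, int(ts)

point = ('hoymiles_pv,dtu_serial_number=xxxx,inverter_port_number=1,'
         'inverter_serial_number=xxxx pv_energy_daily=27i,pv_energy_total=84i,'
         'pv_voltage=31.5,pv_current=1.23,pv_power=38.8 1774600109000000000')
measurement, tags, fields, ts = parse_point(point)
print(measurement, fields["pv_power"])
```

This is just to make the data shape explicit; Telegraf and InfluxDB obviously handle all of this for you.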
* * *

#### Grafana

As a starting point, I added cells to my existing Grafana dashboard. I also created a new dashboard dedicated to the Hoymiles inverter.

* * *

#### Output Level Alerting

Although it doesn't _really_ add much beyond some psychological safety, I created a Grafana alert to warn me if the inverter's AC output ever pushes towards its max capacity. The alert is driven by a simple query to get the maximum reported current in the queried period:

```sql
SELECT max("inverter_current") FROM "Systemstats"."autogen"."hoymiles_inverter" WHERE $timeFilter
```

The maximum output listed on the back of the inverter is 3.7A. However, 800W at 240V is around 3.3A, so I decided to flag if it went much above that.

* * *

### Limiting Export Levels

In order to export electricity and receive payment for it, solar installs have to be registered with the local electricity distributor (the DNO). The idea behind this is that it allows the DNO to maintain an understanding of what might be fed back into the local network. Following install of our main system, we received approval to export up to 3.6kW.

I track our metered electricity usage via our Glow IHD, so I queried this data to check whether we've ever been anywhere near that level:

```flux
from(bucket: "Systemstats")
  |> range(start: 1)
  |> filter(fn: (r) => r["_measurement"] == "smart_meter")
  |> filter(fn: (r) => r["_field"] == "export_now")
  |> aggregateWindow(every: 1d, fn: max, createEmpty: false)
  |> group()
  |> max()
```

Our highest recorded export rate was 2.827kW3. Although we've never hit our approved level, I still needed to make sure that the addition of the new panels wouldn't risk taking us past it.

Because there's a data cable between our main install and the fuse box, our Solis inverter is able to read import and export rates from a meter which sits between our leccy supplier's meter and our consumer unit.
The energy provided by the new panels feeds into the consumer-unit side, so any export that it generates would be included in the Solis inverter's readings. I double-checked that the Solis inverter was configured appropriately.

As a fail-safe, I also added an alert to Grafana so that I'll be paged if our smart meter ever reports greater than 3600W of export:

```sql
SELECT max("export_now") AS "export_now" FROM "Systemstats"."autogen"."smart_meter" WHERE $timeFilter GROUP BY time($__interval) FILL(null)
```

I don't expect that this should _ever_ fire but, if for some reason our export does spike, I should find out in good time rather than relying on eventually being contacted by the DNO.

* * *

### Performance

Unfortunately, the install missed the recent spate of really sunny weather, and most days have suffered from prolonged cloudy spells. However, as I'd hoped, the panels **do** help us produce electricity later into the day, taking advantage of the afternoon sun.

However, the horizontal panel on the shed roof is outputting **much** less power than I'd been hoping for. That's partly a consequence of it being horizontal, but it's also probably a time-of-year thing: it covers a part of the sky that the sun won't be high in until later in the year. Still, it's probably a sign that that location is a pretty poor use of a panel, so I'm considering relocating it to the walkway instead.

What matters **most**, though, is that the panels are delivering energy some way into peak, supporting the battery and shaving some small amount off the power that we draw from the grid as peak pricing starts.

* * *

### Calculating Break-Even

I've put quite a lot of effort into accurately tracking break-even of our main install, so the introduction of the new system presented a number of questions:

* Should I track break-even for this system separately?
* Should I update the existing calculations to factor in new system output (and cost), or only report separately?
For now, I've decided to track it entirely separately: the kit cost hundreds, rather than thousands, of pounds, so it didn't _quite_ seem worth risking (inevitable) mistakes updating the existing calculations. Because there's no battery and no (separate) export, the calculations are quite a bit simpler, feeding into a new dashboard that's similar to my existing one.

* * *

#### The Downside Of Cheap Electricity

It's a "nice" problem to have, of course, but being on Octopus Agile means that, on average, we pay less per unit. Break-even time calculations are based on the cost of the electricity that we'd have had to buy if solar didn't provide it. But, realistically, if we didn't have solar (or, more accurately, the battery) we _probably_ wouldn't actually be on Agile in the first place (and certainly wouldn't achieve such a low average unit price).

To get an idea of the difference, we need to look at fixed-price tariffs instead:

* British Gas: 25.817 p/kWh5
* Octopus: 25.86 p/kWh
* EDF: 27.69 p/kWh

Splitting the difference between those prices gives a unit cost of 26.73 p/kWh6. Although only a matter of pence more, it's still a 40.7% increase: if I used fixed rates in the calculations, the time to break even would almost halve! With that in mind, the current projected break-even doesn't look quite so bad, especially given that the figures are currently based on a single cloudy week.

* * *

### Conclusion

Although the balcony solar kit only provides 25% of the capacity of our existing install, the capital cost was **significantly** lower (helped, of course, by not needing scaffolding). The break-even point should, therefore, come quite a bit sooner.

Once the garage roof has been replaced, the panels will move to a location where they'll be better exposed to the sun (and for longer). The fact that they'll be most active during times of peak pricing should further accelerate them down the path to payback.
Time will tell, but I wouldn't be overly surprised if the break-even period ends up being just a few years (which will be all the more likely for others if these kits ultimately end up in the Aldi middle aisle).

* * *

1. This isn't a bad thing really, even if the timing is a _little_ annoying ↩
2. Which I've always hated anyway ↩
3. Which, rather than being panel driven, is almost certainly from me telling the battery to dump to grid during a savings session or similar ↩
4. Essentially, this panel's role will be to support the battery in supplying our peak usage. The battery already doesn't make economic sense, so adding capacity to it isn't really an option. ↩
5. In keeping with Centrica's awful customer service, you need to provide usage details to see tariff prices. Which is only part of why friends don't let friends use British Gas. ↩
6. Though, between writing this and publication, prices are expected to come down a bit as a result of Ofgem lowering the price cap ↩

New #Blog: Adding a Balcony Solar Kit To Our Capacity
Author: Ben Tasker

www.bentasker.co.uk/posts/blog/house-stuff/a...

#electrical #housestuff #solar #telegraf


The latest update for #InfluxDB includes "What is MRO? Maintenance, Repair, and Operations Explained" and "#Telegraf Enterprise Beta is Now Available: Centralized Control for Telegraf at Scale".

#monitoring #devops #timeseries https://opsmtrs.com/2W5CAx0


Fernfotografie ("remote photography")
• Naturwissenschaftlich-Technische Plaudereien • 1908 •
🌐 epilog.de/7602

#fernfotografie #historischertext #geschichte #damals #telegrafie #telegraf #telekommunikation #zeitreisen


The latest update for #InfluxDB includes "#Telegraf Enterprise Beta is Now Available: Centralized Control for Telegraf at Scale" and "Unifying Telemetry in Battery Energy Storage Systems".

#monitoring #devops #timeseries https://opsmtrs.com/2W5CAx0


Hans Dominik: Die Transradio-Betriebszentrale in Berlin ("The Transradio operations centre in Berlin")
Die Gartenlaube • 3.8.1922
🌐 epilog.de/5134

#hansdominik #historischertext #geschichte #damals #telegrafenamt #telegrafie #telegraf #telekommunikation #zeitreisen

HomeAssistant's history graph widget. The top part shows categorical data, with multiple coloured blocks along the timeline for “Sound meter ann[otation]”. The bottom part shows a single continuous value from sensor “Bt-Proxy-6Fb14C UNI-T Mini Sound Meter UT353BT”. The values are between 40 and 60 dBA, with a single data point spike to 90.


#homeAssistant / #homeLab question

I store some HA historical values into InfluxDB. The app has a Telegraf frontend.

One sensor collects continuous data, while I have added a text input_helper to collect manual annotations about this data.

Is there a way in […]

[Original post on piaille.fr]

Original post on kittsteiner.blog

I had been playing with Grafana, InfluxDB, Telegraf and Prometheus over the last two weeks and can now share my story of what I finally use to replace my Netdata instances and why.

[…]

kittsteiner.blog/blog/2026/migrated-from-... #Ansible #DevOps #Grafana […]


Hans Dominik: Alles drahtlos ("Everything wireless")
• Die Woche • 27.5.1922 •
🌐 epilog.de/5564

#hansdominik #historischertext #geschichte #damals #telegrafie #telegraf #telekommunikation #zeitreisen


Hans Dominik: Die Sendestadt der Zukunft ("The transmitter city of the future")
• Die Woche • 9.10.1920 •
🌐 epilog.de/7758

#hansdominik #nauen #sendernauen #historischertext #geschichte #damals #telegrafie #telegraf #telekommunikation #zeitreisen


Die erste telegrafische Zeitungsdepesche ("The first telegraphic newspaper dispatch")
• Die Abendschule • 26.7.1888 •
🌐 epilog.de/6342

#historischertext #geschichte #damals #telegrafie #telegraf #telekommunikation #zeitreisen


Hans Dominik: Der Funkenturm in Nauen ("The spark tower at Nauen")
• Die Woche • 27.6.1914 •
🌐 epilog.de/4060

#nauen #hansdominik #historischertext #geschichte #damals #telegrafie #telegraf #telekommunikation #zeitreisen

MultiSearch Tag Explorer - Explore tags and search results by aéPiot

#CHIMENTICOOK #RIVER
aepiot.ro/advanced-sea...
#SRPSKI #TELEGRAF
multi-search-tag-explorer.aepiot.com/advanced-sea...

aepiot.ro

Looking for the Perfect Dashboard: InfluxDB, Telegraf, and Grafana – Part XLIX (Monitoring Unofficial Veeam ONE Node Exporter) - The Blog of Jorge de la Cruz

In this blog post by Grafana Champion Jorge de la Cruz, he walks through how to monitor an unofficial Veeam ONE Node Exporter using #Grafana, #InfluxDB, and #Telegraf.

## Playing Around With Woodpecker CI

Last weekend, I decided to stand up Woodpecker CI so that I could have a play around with it. In my working life, I've been exposed to a (horrifying) range of CI/CD systems but (perhaps as a consequence) have never really felt much desire to run anything similar at home. But, I was in the mood to play around with something and this crossed my mind first.

Most of my projects live in a self-hosted Gitlab instance, so I needed to hook Woodpecker up to that. This post talks about deployment and experimentation, including automating the rebuild of some container images.

* * *

### Install

#### Gitlab Pre-Config

Woodpecker doesn't have its own authentication system, instead relying on the VCS system that it's connected to (forges, in Woodpecker parlance). As a result, it needs to be provided with an OAuth application secret generated within the forge. To create those, I logged into Gitlab and went to `Settings` -> `Applications` -> `Add new Application`.

I also needed to log in as admin to permit Gitlab to make requests to Woodpecker's LAN IP (`Admin` -> `Settings` -> `Network`).

* * *

#### Deployment

Woodpecker CI pipeline steps run within Docker containers so, although it's possible to do a native install of Woodpecker, it's really much easier to spin it up with docker compose. I only wanted Woodpecker to be reachable from within the LAN, so I didn't bother allocating a subdomain to it1.

The deployment consists of two components:

* Woodpecker server: the thing that API requests are made against (as well as serving the web interface)
* The agent: a container which spins up worker containers. This can run on a different machine (and there can be multiple of them)

Communications between the agent and the server are authenticated with a shared secret.
To generate a suitable secret, I ran:

```sh
openssl rand -hex 32
```

My `docker-compose` entries looked like this:

```yaml
woodpecker-server:
  image: woodpeckerci/woodpecker-server:v3
  container_name: woodpecker-server
  ports:
    - 8000:8000
    # Make the RPC port available to agents deployed
    # on other boxes
    - 9000:9000
  volumes:
    - /srv/files/woodpecker/data:/var/lib/woodpecker/
  environment:
    # Allow registration (users have to be a user in
    # the forge anyway)
    - WOODPECKER_OPEN=true
    # I used an IP because I don't care about external
    # access.
    - WOODPECKER_HOST=http://192.168.13.25:8000
    # Use Gitlab
    - WOODPECKER_GITLAB=true
    # Provide the client id and secret
    # generated for the application in Gitlab
    - WOODPECKER_GITLAB_CLIENT=<redacted>
    - WOODPECKER_GITLAB_SECRET=<redacted>
    # Use your real gitlab url here
    - WOODPECKER_GITLAB_URL=https://gl.example.com
    # Provide the secret generated with
    # openssl rand -hex 32
    #
    # Note: the same secret must be provided to
    # the agent
    - WOODPECKER_AGENT_SECRET=<redacted>

woodpecker-agent:
  image: woodpeckerci/woodpecker-agent:v3
  container_name: woodpecker-agent
  command: agent
  restart: always
  depends_on:
    - woodpecker-server
  volumes:
    - /srv/files/woodpecker/agent:/etc/woodpecker
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    # Because they're in the same docker network
    # we can use container name
    #
    # If this were on another box, we'd need to
    # use the system name/IP
    - WOODPECKER_SERVER=woodpecker-server:9000
    # This must be the same secret as provided to
    # the server
    - WOODPECKER_AGENT_SECRET=<redacted>
```

After saving the file, I brought the containers up:

```sh
docker compose up -d
```

The server instance immediately crashed out:

> {"level":"warn","time":"2025-12-14T17:26:37Z","message":"no sqlite3 file found, will create one at '/var/lib/woodpecker/woodpecker.sqlite'"}
>
> {"level":"error","error":"unable to open database file: no such file or directory","time":"2025-12-14T17:26:37Z","message":"error running server"}

It's supposed to create a database at startup but
seemingly wasn't able to. I checked the upstream dockerfile and noted that it ran under a user with a UID of 1000, so I updated permissions on the local directory:

```sh
sudo chown -R 1000:1000 /srv/files/woodpecker/data/
docker compose up -d
```

The container came up cleanly this time. Hitting the web interface (port 8000) prompted me to log in with Gitlab. It was a little slow, but logging in worked.

* * *

### What Now?

* * *

#### An Image Build Pipeline

I recently moved our music to Gonic and, as part of that, built a Wolfi-based container image to sync stars between Gonic and Last.FM. Wolfi is a rolling-release (un)distro benefiting from regular CVE remediation2, so there's tangible benefit to periodically rebuilding container images that are based upon it. This seemed like a good place to start, so I added the repo to Woodpecker.

My git repo _literally_ consisted of a `Dockerfile` and a README, so my intention was to use the docker-buildx plugin to build the image. In the repo, I created `.woodpecker/build.yaml`:

```yaml
when:
  - event: push
    branch: main
  - event: cron
    cron: Daily

steps:
  - name: publish
    image: woodpeckerci/plugin-docker-buildx
    settings:
      platforms: linux/amd64
      repo: registry.example.com/utilities/gonic-last-fm-sync-docker
      registry: registry.example.com
      tags: latest
```

Committing and pushing this triggered CI, which _immediately_ failed. The error message is pretty self-explanatory: previous versions of Woodpecker automatically considered the plugin privileged, but that changed relatively recently. The solution was to add an environment variable to `woodpecker-server` and restart it:

```yaml
- WOODPECKER_PLUGINS_PRIVILEGED=woodpeckerci/plugin-docker-buildx
```

I went to Woodpecker's web interface and hit `Restart` on the failed job. It failed again... I thought I'd broken something, but a bit of searching around suggested that this was a bit of a nonsense error and that the "fix" was actually just to push a new commit. So I did that.
The job got further this time, but failed again:

```
+ git fetch --no-tags --depth=1 --filter=tree:0 origin +cb7487b3d3b07c4e77392522c1a97512a9a5a8d7:
fatal: could not read Username for 'https://gl.example.com': No such device or address
exit status 128
```

The error made it clear that there was some authentication required, but I _thought_ that that was all supposed to be magically handled in the background. The issue turned out to be the gitlab project's visibility setting: the result of this was that Woodpecker could see the repo in API calls but didn't know that it needed to authenticate. If the visibility had been `Private` or `Public`, it would have just worked.

I also needed to turn on support for HTTPS cloning3 (`Admin` -> `Settings` -> `General`). To ensure it could handle `Internal` repos, I added a new env variable to `woodpecker-server`:

```yaml
- WOODPECKER_AUTHENTICATE_PUBLIC_REPOS=true
```

After restarting the container and pushing a new commit, CI built a new image and pushed it to my custom registry. Finally, I logged into Woodpecker's UI and created a cron called `Daily`. Every night since, the image has been rebuilt and pushed.

* * *

##### Other Registries

I won't go into as much depth here because most of it is the same regardless of the registry that you're pushing to. I decided to enable periodic rebuilds for some of my other images, but those are published on external registries:

* Docker Hub
* Github Container Registry
* Codeberg Registry

The thing that each of these has in common, of course, is that they require authentication to push. Unsurprisingly, the build plugin has support for auth.
I created secrets in Woodpecker (the option is under the settings cog in each repo). Secrets can then be referenced under the `settings` attribute:

```yaml
steps:
  - name: publish
    image: woodpeckerci/plugin-docker-buildx
    settings:
      repo: ghcr.io/bentasker/soliscloud-inverter-control
      registry: ghcr.io
      platforms: linux/amd64,linux/arm64/v8
      tags: latest
      username:
        from_secret: github_username
      password:
        from_secret: github_token
```

The only real difference for Docker Hub is that you store a password rather than a token.

* * *

##### Additional Settings

There are a couple of additional settings that I've since used which aren't mentioned above:

* `auto_tag`: enables automatic tag calculation. If I push a (git) tag of `v1.1.1`, CI will automatically tag the container with `v1.1.1`, `v1.1` and `v1`, along with whatever I've specified in `tags`
* `when`: allows you to filter events at a per-pipeline-step level. I've used this to have one build definition but change the destination tags based on whether the build is the result of a cronjob or a push.

There's an example of using both in my soliscloud inverter control repo (that repo automatically mirrors into my Gitlab, allowing Woodpecker to act upon it).

* * *

##### Automatically Rolling Containers

Although Woodpecker was periodically rebuilding images, the systems _consuming_ those images still needed to be configured to periodically pull new versions. `docker compose` _does_ have support for time-based pull policies:

* `daily`: Compose checks the registry for image updates if the last pull took place more than 24 hours ago.
* `weekly`: Compose checks the registry for image updates if the last pull took place more than 7 days ago.
* `every_<duration>`: Compose checks the registry for image updates if the last pull took place before `<duration>`. Duration can be expressed in weeks (`w`), days (`d`), hours (`h`), minutes (`m`), seconds (`s`) or a combination of these.
However, although these _sound_ ideal, it appears that they only apply when containers are restarted or recreated, so extra orchestration would still be required. Instead, I looked at using Watchtower. However, my initial attempts to use that failed because the embedded docker client was too old:

```
Error response from daemon: client version 1.25 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version
```

That message led me to a Github issue full of people complaining of the same thing and noting that the project seemed to be unmaintained (since then, the project has actually been archived and a goodbye note posted - I assume someone went and bugged the maintainer about the breakage). Although the Github issue contained a workaround, I decided that I wanted something that's actively maintained and so settled on Nick Fedor's fork.

The box that runs `gonic-lastfm-sync` also runs some other containers that I didn't want automatically restarting. Because I also run Rancher's k3d on that system, some of those containers can't easily have custom labels added to them. Although it's possible to pass `watchtower` an explicit list of containers to monitor, I decided to rely on labels instead (the idea being that I _should_ be less likely to forget when making changes).
I added a label to `gonic_sync`:

```yaml
gonic_sync:
  restart: always
  image: registry.example.com/utilities/gonic-last-fm-sync-docker:latest
  container_name: gonic-lastfm-sync
  labels:
    - "com.centurylinklabs.watchtower.enable=true"
  environment:
    - GONIC_GONIC_USERNAME=ben
  volumes:
    - /home/ben/docker_files/files/gonic/data:/data
```

I then stood `watchtower` up with an environment variable to tell it to only restart appropriately labelled containers:

```yaml
watchtower:
  image: nickfedor/watchtower
  container_name: watchtower
  environment:
    # Only restart containers with enable set
    # to true
    - WATCHTOWER_LABEL_ENABLE=true
    # This is actually the default
    # but I don't want to have to remember that
    - WATCHTOWER_POLL_INTERVAL=86400
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
```

* * *

### Telegraf

Everything seemed to be up and running, but I realised that there _was_ a small issue (in fact, more _a risk_, really). I use Telegraf and InfluxDB to monitor my kit. Although InfluxDB 3 significantly reduces cardinality concerns, I'm still running the older versions, which can suffer more with extremely high cardinality datasets.

Woodpecker CI creates a dedicated container for each pipeline step, using a UUID as part of the container name. So that I can monitor resource usage, I have Telegraf's docker input plugin configured to collect metrics:

```toml
[[inputs.docker]]
endpoint = "unix:///var/run/docker.sock"
timeout = "5s"
interval = "5m"
```

This resulted in series appearing in InfluxDB with quite meaningless container names. The metrics themselves _weren't_ meaningless: I **wanted** to be able to see what resources CI was consuming, but I also didn't want (or need) an individual series per pipeline step invocation. I _definitely_ didn't need any cardinality concerns that might follow as a result. So, I decided to use the Starlark processor plugin to squash those container names down into a placeholder.
For those who aren't familiar with Starlark, it's essentially a dialect of Python:

```python
def apply(metric):
    # Rewrite worker container names
    if "container_name" in metric.tags and metric.tags["container_name"].startswith("wp_"):
        metric.tags["container_name"] = "woodpecker_worker_container"

    # Also strip the wp_uuid tag
    if "wp_uuid" in metric.tags:
        metric.tags.pop("wp_uuid")

    return metric
```

The entry in Telegraf's config file looked like this:

```toml
[[processors.starlark]]
  # jira-projects/LAN#248
  # Don't let woodpecker ephemeral containers
  # blow up cardinality
  namepass = ["docker*"]
  source = '''
def apply(metric):
    # Rewrite worker container names
    if "container_name" in metric.tags and metric.tags["container_name"].startswith("wp_"):
        metric.tags["container_name"] = "woodpecker_worker_container"

    # Also strip the wp_uuid tag
    if "wp_uuid" in metric.tags:
        metric.tags.pop("wp_uuid")

    return metric
'''
```

The next time that a CI job ran, the new placeholder name appeared in InfluxDB.

As a solution, this really isn't perfect: InfluxDB uses tag values to build the series key, and multiple writes into the same series with the same timestamp will overwrite one another. This change removed a unique identifier. While that reduces cardinality growth, it also increases the potential for collision if two CI jobs run at the same time.

However, there are probably still enough distinguishing tags left, including:

* `container_image`: the image being used by that step (e.g. `woodpeckerci/plugin-docker-buildx`)
* `wp_step`: the name given to the step in YAML (e.g. `publish`)

For a collision to occur, CI would need to be running exactly the same container image, with exactly the same step name, in two jobs at the same time. That's not impossible but (at the scale that I'm using it) _reasonably_ improbable. If I ever reached the scale where it was more likely, distributing jobs across hardware would help (because the `hostname` tag would then differ).
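To make the cardinality/collision trade-off concrete, here's a short Python sketch (my own illustration, not something from the original setup): a series is identified by the measurement name plus the full set of tag key/value pairs, so squashing the container name collapses per-invocation series, while the remaining tags still distinguish different steps.

```python
# Sketch: why squashing container_name caps series growth.
# An InfluxDB series is keyed by measurement + sorted tag pairs.

def series_key(measurement, tags):
    """Build a hashable series key from measurement + sorted tags."""
    return (measurement, tuple(sorted(tags.items())))

def squash(tags):
    """Mimic the Starlark processor: rename wp_* containers, drop wp_uuid."""
    t = dict(tags)
    if t.get("container_name", "").startswith("wp_"):
        t["container_name"] = "woodpecker_worker_container"
    t.pop("wp_uuid", None)
    return t

# 100 invocations of the same pipeline step, each with a unique container
runs = [
    {
        "container_name": f"wp_{i:04d}",
        "wp_uuid": str(i),
        "container_image": "woodpeckerci/plugin-docker-buildx",
        "wp_step": "publish",
    }
    for i in range(100)
]

raw_series = {series_key("docker", t) for t in runs}
squashed_series = {series_key("docker", squash(t)) for t in runs}

print(len(raw_series))       # 100 - one series per step invocation
print(len(squashed_series))  # 1   - all invocations share a series
```

A run with a different `wp_step` or `container_image` would still produce its own series key, which is why collisions only occur when two simultaneous jobs run an identical step.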
They're really not the most interesting of graphs, but the metrics allow me to visualise the RAM, CPU and network that pipeline steps are consuming.

* * *

### Conclusion

Going into this, I think I overestimated how complex hooking Woodpecker CI up would be. Although I ran into some (quite minor) headaches, it really didn't take long at all, leaving me to figure out what I was actually _going to do_ with the system.

Since then, I've set up automated (and scheduled) rebuilds of some container images, pushing the result to a range of container registries. I've also set up git-ops for one of my static sites but, as this post is already quite long, I'll probably write about that separately.

One of the other benefits that I haven't mentioned so far is that it's allowed me to (easily) start cross-building container images. With no extra hassle for me, images like soliscloud-inverter-control can now be run on ARM as well as on x86_64.

So far, the system seems to be reliable and dependable4, although I'm not _exactly_ pushing it hard.

* * *

1. I've since decided that this was probably a mistake, so may well change it over the Xmas period ↩
2. Disclosure: I work for Chainguard but I'm not trying to sell you anything and any views expressed here are my own and not necessarily those of my employer ↩
3. I'm not actually sure if this was off by default or if I've previously disabled it ↩
4. I _may_ have joked to someone this week that GitHub's 39% reduction in Actions prices was a result of them tying pricing to reliability and performance. ↩

New #Blog: Playing Around With Woodpecker CI
Author: Ben Tasker

www.bentasker.co.uk/posts/blog/general/playi...

#containers #docker #general #gitlab #telegraf #watchtower #woodpecker-ci

* * *
## Monitoring a UPS with Telegraf and Grafana

Our power supply is normally pretty reliable, but last week we had an outage. Although we've got solar, we don't (currently) have an islanding switch, so when the grid goes down, so do we.

This power outage only lasted about 45 minutes, but came at a _really_ bad time: I was due to be interviewing someone, so had to try and get signal so that I could _at least_ send an SMS and tell them that we'd need to re-schedule.

I _used_ to have a UPS, but didn't replace it after the battery reached end-of-life - at the time we had a young child in the house, so having something be persistently energised seemed like quite a bad idea. That's no longer a concern though, so I decided that it was time to plug important things (laptop, switch, router etc.) into a UPS - partly to protect them from damage, but also so that there's something that I can _do_ during an outage (this week, I couldn't do much more than sit and work my way through a Toblerone).

This post details the process of installing Network UPS Tools (NUT) and configuring Telegraf to collect metrics from it, allowing graphing and alerting in Grafana.

* * *

### The UPS

It doesn't matter _too much_ what model of UPS you have: NUT supports a wide range of kit. Mine has a USB connection, so we're using NUT's `usbhid` support.

My UPS is a Powerwalker VI Series UPS and shows up in `lsusb` like this:

```
Bus 006 Device 015: ID 0764:0601 Cyber Power System, Inc. PR1500LCDRT2U UPS
```

The UPS has 4 mains plug sockets on the back, so I've got a few things plugged in:

* My router/firewall (our fiber ONT is in a different room and has its own battery backup)
* My main switch
* My NAS
* An external HDD array
* The extension lead which runs my desk

Running my desk means that it has to power a couple of monitors **and** a couple of laptops.
This isn't _quite_ as bad as it sounds though:

* If I'm not at my desk, the monitors will be off and the laptops will be (relatively) idle
* If I _am_ at my desk, the plan is to unplug the laptops and have them run off battery so that they're not using the UPS's capacity

* * *

### NUT setup

#### Installing

NUT is in the Ubuntu repos, so:

```sh
sudo apt update
sudo apt install nut nut-client nut-server
```

Next we confirm that NUT can actually see the UPS:

```sh
sudo nut-scanner -U
```

If all is well, this'll write out a config block:

```
[nutdev1]
    driver = "usbhid-ups"
    port = "auto"
    vendorid = "0764"
    productid = "0601"
    product = "2200"
    serial = "11111111111111111111"
    vendor = "1"
    bus = "006"
```

We need to write that into NUT's config, so invoke again but redirect:

```sh
sudo nut-scanner -UNq 2>/dev/null | sudo tee -a /etc/nut/ups.conf
```

The name `nutdev1` isn't _particularly_ informative, though, so we can also hand edit `ups.conf` to change it (and add a `desc` attribute to provide a description of the UPS):

```sh
sudo nano /etc/nut/ups.conf
```

I set mine like this:

```
[deskups]
    desc = "Cyber Power System UPS"
    driver = "usbhid-ups"
    port = "auto"
    vendorid = "0764"
    productid = "0601"
    product = "2200"
    serial = "11111111111111111111"
    vendor = "1"
    bus = "006"
```

Make a note of the name (the bit in square brackets), we'll need it shortly.
* * *

#### Setting Up For Monitoring

Next, we want to set up credentials for the NUT server.

I used my `gen_passwd` utility to generate a random password, but use whatever method suits you:

```sh
NUT_PW=`gen_passwd 24 nc`
```

Create the user:

```sh
echo -e "\n[monitor]\n\tpassword = ${NUT_PW}\n\tupsmon master\n" | sudo tee -a /etc/nut/upsd.users
```

Now provide the credentials to `upsmon`, changing the value of `UPS_NAME` to match the one that you set for the UPS in `ups.conf` earlier:

```sh
# Change to match the name in ups.conf
UPS_NAME="deskups"
echo -e "\nMONITOR $UPS_NAME@localhost 1 monitor $NUT_PW master\n" | sudo tee -a /etc/nut/upsmon.conf
```

Keep a note of the UPS name and password, we'll need them again when configuring `telegraf`.

Configure NUT to run as a netserver (so that Telegraf can talk to it):

```sh
sudo sed -e 's/MODE=none/MODE=netserver/' -i /etc/nut/nut.conf
```

Restart the services:

```sh
for i in nut-server nut-client nut-driver nut-monitor
do
    sudo systemctl restart $i
done
```

Confirm that the NUT server is listening:

```
$ sudo netstat -lnp | grep 3493
tcp        0      0 127.0.0.1:3493   0.0.0.0:*   LISTEN   3854210/upsd
tcp6       0      0 ::1:3493         :::*        LISTEN   3854210/upsd
```

Check that we get data back about the UPS:

```sh
upsc $(upsc -l 2>/dev/null) 2>/dev/null
```

If all is well, we're ready to move on to collecting data.

* * *

### Collection and Visualisation

With NUT now able to report on the UPS, the next step is to have that data collected so that we can visualise it and (optionally) alert based upon it.

* * *

#### Telegraf

We're going to use the upsd input plugin to talk to NUT. This was introduced in Telegraf v1.24.0 so, if you're using an existing install, make sure that your `telegraf` is recent enough:

```sh
telegraf version
```

If you don't have Telegraf, there are install instructions here (note: you're also going to want an InfluxDB instance or free cloud account because the dashboard that we'll use for visualisation uses Flux).
The input plugin is pretty simple to configure; append the following to `/etc/telegraf/telegraf.conf`:

```toml
[[inputs.upsd]]
  ## A running NUT server to connect to.
  ## IPv6 addresses must be enclosed in brackets (e.g. "[::1]")
  server = "127.0.0.1"
  port = 3493

  # The values for these are found in /etc/nut/upsmon.conf
  username = "deskups@localhost"
  password = "[redacted]"
  additional_fields = ["*"]

# Map enum values according to given table.
##
## UPS beeper status (enabled, disabled or muted)
## Convert 'enabled' and 'disabled' values back to string from boolean
[[processors.enum]]
  [[processors.enum.mapping]]
    field = "ups_beeper_status"
    [processors.enum.mapping.value_mappings]
      true = "enabled"
      false = "disabled"
```

After restarting (or reloading) `telegraf`, you should start to see metrics appearing in InfluxDB.

* * *

#### Visualisation

I use Grafana for visualisation and, conveniently, there was already a community dashboard (the source for which can be found on GitHub).

On the community page, click `Download JSON`. Then, in Grafana:

* `New Dashboard`
* `Import JSON`
* Drag the JSON file over

You'll be presented with a set of options for the dashboard - choose the relevant InfluxDB datasource to query against. You'll then be taken to the dashboard itself.

It's quite likely that the dashboard will be broken - by default, it looks for a bucket called `upsd-Telegraf` (I write into a bucket called `telegraf`). To fix it:

* `Settings`
* `Variables`
* `bucket`

Scroll down to find `Values separated by comma` and change it to contain the name of your bucket.

Click `Back to Dashboard` and the dashboard should now load.

I already track electricity costs, plus we're on a 30 minute tariff, so I also edited the dashboard to remove the cost related row (and then the associated variables).

* * *

#### Alerting

The `upsd` measurement contains a field called `ups_status` which will normally be `OL` (online). If the mains cuts out (or someone unplugs it to test behaviour...)
the value will change to report that the UPS is running from battery.

Note: the new state `OB DISCHRG` isn't actually a single status; it's reporting two (closely related) status flags. After power is restored, the UPS reports itself back online _but_ also notes that the battery is now charging.

This means that creating an alert is **not** as simple as `if r.ups_status != "OL"`. I also only _really_ wanted an email notification to warn me of the following status symbols:

* We're running from battery (flag: `OB`)
* The UPS is reporting an alarm (flag: `ALARM`)
* The UPS is reporting that the battery charge is too low (flag: `LB`)
* The UPS is reporting overload (flag: `OVER`)
* The UPS requires battery replacement (flag: `RB`)

RFC 9271's status symbols are quite well designed in that none of the flags we're interested in appears as a substring of another defined symbol (`CHRG`, which appears inside `DISCHRG`, is a notable exception to the general rule, but it's not one we're matching on), so we can safely do something like:

```python
for flag in ["OB", "ALARM", "LB", "OVER", "RB"]:
    if flag in ups.status:
        alarm()
```

Of course, to do that with Grafana's alerting, we need to translate the logic into a Flux query:

```flux
// Define the regex to use when checking for alertable states
alarm_regex = /(OB|LB|OVER|RB|ALARM)/

// Extract reported status
from(bucket: "telegraf")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r["_measurement"] == "upsd")
    |> filter(fn: (r) => r["_field"] == "ups_status")
    |> group(columns: ["ups_name", "_field"])
    |> keep(columns: ["_time", "_value", "_field", "ups_name"])
    |> aggregateWindow(every: 1m, fn: last, createEmpty: false)

    // Identify whether the status contains any flags of concern
    // Grafana alerting requires the main column to be numeric
    // so we need to shuffle things around
    |> map(fn: (r) => ({
        _time: r._time,
        //flags: r._value,
        ups_name: r.ups_name,
        _value: if r._value =~ alarm_regex then 1 else 0
    }))
    |> group(columns: ["ups_name"])
```

The return values of this query are based on whether any of the problematic flags exist: if they don't, it'll return 0; if they do, the value will be 1.
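The substring property can be sanity-checked with a short Python sketch (my own illustration, not from the original post; the symbol list is a subset of the status symbols defined by RFC 9271 / NUT):

```python
# Confirm that the flags being alerted on can't appear as substrings
# of other status symbols, which would make a naive "flag in status"
# check produce false positives.
STATUS_SYMBOLS = [
    "OL", "OB", "LB", "RB", "CHRG", "DISCHRG", "ALARM",
    "OVER", "TRIM", "BOOST", "BYPASS", "CAL", "OFF", "FSD",
]
ALERT_FLAGS = ["OB", "ALARM", "LB", "OVER", "RB"]

for flag in ALERT_FLAGS:
    clashes = [s for s in STATUS_SYMBOLS if s != flag and flag in s]
    assert not clashes, f"{flag} would false-positive on {clashes}"

# By contrast, CHRG can't safely be matched this way:
print("CHRG" in "OB DISCHRG")  # True, despite CHRG not being set
```

The same reasoning is what makes the regex alternation in the Flux query safe: none of the alternatives can match inside a different symbol.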
This allows use of a simple threshold in the Grafana alerting config.

With the alert saved, I unplugged the UPS and waited. A minute later, the alert was escalated to PagerDuty. A couple of minutes after plugging the UPS back in, the alert recovered.

* * *

### Conclusion

Setting up monitoring of the UPS was pretty easy: NUT supports a wide range of devices and exposes status in a standardised way. NUT is well supported by Telegraf, and there was _already_ a community dashboard available to visualise UPS status. This means that, in practice, the hardest part of all of this was fishing the relevant power leads out of the rack to plug into the back of the UPS.

Now, if the power fails, I _should_ (depending on whether our fiber connection is still lit up) get a page to warn me. Either way, the UPS will provide some coverage for small outages.

New #Documentation: Monitoring a UPS with Telegraf and Grafana
Author: Ben Tasker

www.bentasker.co.uk/posts/documentation/linu...

#alerting #electricity #grafana #monitoring #telegraf #ups
