<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>tchncs</title>
    <link>https://text.tchncs.de/tchncs/</link>
    <description>News about the tchncs.de service.</description>
    <pubDate>Sun, 12 Apr 2026 12:33:45 +0000</pubDate>
    <item>
      <title>About the Matrix incident on July 26 2023</title>
      <link>https://text.tchncs.de/tchncs/about-the-matrix-incident-on-july-26-2023</link>
      <description>&lt;![CDATA[About the Matrix incident on July 26 2023&#xA;&#xA;What happened&#xA;&#xA;Our two NVMe storage drives have been well over 200% of the reported lifetime used. That was on purpose, to reduce e-waste. As somehow they managed to get too synchronized with it and both reported 255%, I got scared that the raid may not be as effective anymore in case of an incident. I then requested the replacement of one of the drives that had the highest RW counter – however note that there was no data integrety error and the raid was healthy.&#xA;&#xA;  We have now replaced the requested drive, however - it would seem that both of the drives have failed due to their condition. We have replaced requested drive and booted the server into the Rescue system, however we are unable to make the remaining original drive visible to the system. Even replacement of the connectors does not appear to make the drive visible.&#xA;&#xA;Sadly the hoster could neither get the the remaining drive, nor the replaced drive recognized by the os anymore and left me the server in a state where I basically only had one blank drive.&#xA;&#xA;After multiple unsuccessful attempts with kernel parameters and a while of having the server powered off and stuff, to make the rescue OS initialize the drive again, i then had to give in and requested the other drive to be replaced as well.&#xA;&#xA;[Wed Jul 26 19:19:10 2023] nvme nvme1: I/O 9 QID 0 timeout, disable controller&#xA;[Wed Jul 26 19:19:10 2023] nvme nvme1: Device shutdown incomplete; abort shutdown&#xA;[Wed Jul 26 19:19:10 2023] nvme nvme1: Removing after probe failure status: -4&#xA;As soon as the new drive was in place and i gathered everything to be able to restore from backups, I have started to set the &#34;new&#34; server up and to restore from backups. 
Around 1:45a, I accepted that it was bedtime.&#xA;&#xA;Next day I figured that the previous Debian installation was still on an old, now unmaintained PostgreSQL version, so I wanted to take the opportunity to upgrade to a current version. Sadly this took way too long with native tools (linking methods did not work due to incompatibility) and after around 6-7hr I have canceled the progress and will retry with 3rdparty tooling later.&#xA;&#xA;However, this Synapse setup was different from my customer servers and linked the config to /etc/synapse from the encrypted partition for some reason. You guessed it, I was backing up a symlink. That meant that ontop of the missing Synapse config, also the signing key for messages was lost and also the original webserverconfig missing. &#xA;&#xA;The state from late afternoon on July 27&#xA;After most important config parts were rebuilt, the Synapse server started with most important workers and on purpose without media repo and without appservices or presence support and tried to get back in sync with the network:&#xA;&#xA;iframe src=&#34;https://social.tchncs.de/@milan/110786503409595204/embed&#34; class=&#34;mastodon-embed&#34; style=&#34;max-width: 100%; border: 0&#34; height=250 width=700 allowfullscreen=&#34;allowfullscreen&#34;/iframescript src=&#34;https://social.tchncs.de/embed.js&#34; async=&#34;async&#34;/script&#xA;&#xA;I then had a little fight with the media repository which uses 3rdparty software. Functionallity has been restored, though for a yet unclear reason, some image resolutions for room/people avatars are still missing. Also here the original config was gone.&#xA;&#xA;Later that night&#xA;Around midnight, Whatsapp, Telegram and Signal bridges have returned.&#xA;&#xA;July 28&#xA;&#xA;Current state is that we (the community and me) are waiting for outgoing federation to return to normal. Some servers already work fine, others still don&#39;t and Matrix.org only from time to time. 
Please have patience, it is expected to sort itself out within the next hours. Right now we assume the delays / missing outgoing federation is caused by the new signing key mentioned above.&#xA;Presence has returned (online status of users)&#xA;Our moderation bot has returned&#xA;As a last resort, an issue report for the federation issues has been filed.&#xA;&#xA;July 29&#xA;&#xA;around 2am, it was discovered / reproduced that the server signature-keys are not properly refreshed on remote servers and they throw errors like Signature on retrieved event $e4xQAons8TGPgR4iy4RhGRX0dfCZmRTrhdL9MoypM was invalid (unable to verify signature for sender domain tchncs.de: 401: Failed to find any key to satisfy. It&#39;s a good thing to have at least some certainty. Still hoping for help on Github while looking for options.&#xA;external login providers have been added again&#xA;most media issues (loading small versions of images such as avatars) should be resolved&#xA;&#xA;hr&#xD;&#xA;How to contact me:  &#xD;&#xA;Follow me on Mastodon / More options on tchncs.de]]&gt;</description>
      <content:encoded><![CDATA[<h3 id="about-the-matrix-incident-on-july-26-2023">About the Matrix incident on July 26 2023</h3>

<h4 id="what-happened"><strong>What happened</strong></h4>

<p>Our two NVMe storage drives have been well over 200% of their reported lifetime used. That was on purpose, <strong>to reduce e-waste</strong>. Somehow their wear got so synchronized that both reported 255%, and I got scared that the RAID might no longer be effective in case of an incident. I then requested the replacement of the drive with the highest RW counter – however, <strong>note that there was no data integrity error and the RAID was healthy.</strong></p>
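<p>For context on how that wear figure is obtained: NVMe drives report a 0–255 “percentage used” counter via SMART. A minimal sketch of reading it from <code>smartctl -a -j</code> JSON output (the sample fragment below is made up; the field names follow smartmontools):</p>

```python
import json

def wear_percentage(smartctl_json):
    """Extract the NVMe 'percentage used' value from smartctl JSON output."""
    data = json.loads(smartctl_json)
    return data["nvme_smart_health_information_log"]["percentage_used"]

# Hypothetical sample fragment; 255 is the counter's cap, i.e. the rated
# lifetime is (more than) fully used.
sample = '{"nvme_smart_health_information_log": {"percentage_used": 255}}'
print(wear_percentage(sample))  # 255
```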

<blockquote><p>We have now replaced the requested drive, however – it would seem that both of the drives have failed due to their condition. We have replaced requested drive and booted the server into the Rescue system, however we are unable to make the remaining original drive visible to the system. Even replacement of the connectors does not appear to make the drive visible.</p></blockquote>

<p><strong>Sadly the hoster could get neither the remaining drive nor the replaced drive recognized by the OS anymore</strong> and left me the server in a state where I basically only had one blank drive.</p>

<p>After multiple unsuccessful attempts with kernel parameters, and after leaving the server powered off for a while to make the rescue OS initialize the drive again, I had to give in and requested the other drive to be replaced as well.</p>

<pre><code>[Wed Jul 26 19:19:10 2023] nvme nvme1: I/O 9 QID 0 timeout, disable controller
[Wed Jul 26 19:19:10 2023] nvme nvme1: Device shutdown incomplete; abort shutdown
[Wed Jul 26 19:19:10 2023] nvme nvme1: Removing after probe failure status: -4
</code></pre>

<p>As soon as the new drive was in place and I had gathered everything needed to restore from backups, I started setting up the “new” server and restoring from them. Around 1:45 am, I accepted that it was bedtime.</p>

<p><strong>The next day</strong> I figured out that the previous Debian installation was still on an <strong>old, now unmaintained PostgreSQL version</strong>, so I wanted to take the opportunity to <strong>upgrade to a current version</strong>. Sadly this took way too long with native tools (linking methods did not work due to incompatibility), and after around 6–7 hours I canceled the process; I will retry with third-party tooling later.</p>

<p>However, this Synapse setup was different from my customer servers and, for some reason, linked the config to <code>/etc/synapse</code> from the encrypted partition. <strong>You guessed it: I was backing up a symlink.</strong> That meant that on top of the missing Synapse config, the signing key for messages was lost and the original webserver config was missing as well.</p>
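<p>The symlink pitfall is easy to reproduce: archivers store the link itself by default, not the file it points to. A small self-contained sketch (file names are illustrative, not the actual setup):</p>

```python
import os
import tarfile
import tempfile

# A real config file plus a symlink to it, mimicking a config
# linked into place from another (encrypted) partition.
root = tempfile.mkdtemp()
real = os.path.join(root, "homeserver.yaml")
link = os.path.join(root, "homeserver-link.yaml")
with open(real, "w") as f:
    f.write("server_name: example.org\n")
os.symlink(real, link)

# Default behaviour: the archive contains only the symlink, no data.
with tarfile.open(os.path.join(root, "backup.tar"), "w") as tar:
    tar.add(link, arcname="homeserver.yaml")
with tarfile.open(os.path.join(root, "backup.tar")) as tar:
    print(tar.getmember("homeserver.yaml").issym())  # True

# dereference=True follows the link and stores the real file instead.
with tarfile.open(os.path.join(root, "backup2.tar"), "w", dereference=True) as tar:
    tar.add(link, arcname="homeserver.yaml")
with tarfile.open(os.path.join(root, "backup2.tar")) as tar:
    print(tar.getmember("homeserver.yaml").isfile())  # True
```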

<h4 id="the-state-from-late-afternoon-on-july-27">The state from late afternoon on July 27</h4>

<p>After the most important config parts were rebuilt, the Synapse server started with the most important workers – deliberately without the media repo, appservices, or presence support – and tried to get back in sync with the network:</p>

<p><iframe src="https://social.tchncs.de/@milan/110786503409595204/embed" class="mastodon-embed" style="max-width: 100%; border: 0" height="250" width="700" allowfullscreen="allowfullscreen"></iframe></p>

<p>I then had a little fight with the media repository, which uses third-party software. Functionality has been restored, though for a yet unclear reason some image resolutions for room/people avatars are still missing. Here, too, the original config was gone.</p>

<h4 id="later-that-night">Later that night</h4>

<p>Around midnight, the WhatsApp, Telegram and Signal bridges returned.</p>

<h4 id="july-28">July 28</h4>
<ul><li>Current state is that we (the community and I) are waiting for outgoing federation to return to normal. Some servers already work fine, others still don&#39;t, and Matrix.org only works from time to time. <strong>Please have patience; it is expected to sort itself out within the next few hours.</strong> Right now we assume the delayed/missing outgoing federation is caused by the new signing key mentioned above.</li>
<li>Presence has returned (online status of users)</li>
<li>Our moderation bot has returned</li>
<li>As a last resort, <a href="https://github.com/matrix-org/synapse/issues/16025" rel="nofollow">an issue report for the federation issues has been filed</a>.</li></ul>

<h4 id="july-29">July 29</h4>
<ul><li>Around 2 am, it was discovered/reproduced that the server signature keys are not properly refreshed on remote servers, which then throw errors like <code>Signature on retrieved event $e4xQAons8TGPgR4iy4RhGRX0dfCZmRTrhdL9MoypM was invalid (unable to verify signature for sender domain tchncs.de: 401: Failed to find any key to satisfy</code>. It&#39;s a good thing to have at least some certainty. Still hoping for help on GitHub while looking for options.</li>
<li>External login providers have been added again</li>
<li>Most media issues (loading small versions of images such as avatars) should be resolved</li></ul>
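<p>What the federation errors above boil down to is caching: remote servers had cached the old signing key and kept rejecting signatures made with the new key until their cached copy expired or was refreshed. A toy model of such a cache (purely illustrative – this is not Synapse&#39;s actual implementation):</p>

```python
class KeyCache:
    """Toy model of a remote server's signing-key cache."""

    def __init__(self, max_age=3600):
        self.max_age = max_age
        self.cache = {}  # server_name: (key_id, fetched_at)

    def verify(self, server_name, key_id, fetch_current_key, now):
        entry = self.cache.get(server_name)
        if entry is None or now - entry[1] > self.max_age:
            entry = (fetch_current_key(), now)  # (re)fetch the server's key
            self.cache[server_name] = entry
        return entry[0] == key_id  # fails while a stale key is cached

cache = KeyCache()
cache.verify("tchncs.de", "old_key", lambda: "old_key", now=0)  # caches the old key
ok = cache.verify("tchncs.de", "new_key", lambda: "new_key", now=10)
print(ok)  # False: the stale cached key cannot verify the new signature
ok = cache.verify("tchncs.de", "new_key", lambda: "new_key", now=4000)
print(ok)  # True: the cache expired, so the new key was fetched
```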

<hr>

<p><strong>How to contact me:</strong><br>
<a href="https://social.tchncs.de/@milan" rel="nofollow"><em>Follow me on Mastodon</em></a> / <a href="https://tchncs.de/contact" rel="nofollow"><em>More options on tchncs.de</em></a></p>
]]></content:encoded>
      <guid>https://text.tchncs.de/tchncs/about-the-matrix-incident-on-july-26-2023</guid>
      <pubDate>Fri, 28 Jul 2023 11:58:13 +0000</pubDate>
    </item>
    <item>
      <title>Our new OpenTalk test-instance</title>
      <link>https://text.tchncs.de/tchncs/our-new-opentalk-test-instance</link>
      <description>&lt;![CDATA[OpenTalk Screenshot&#xA;&#xA;div class=center&#xA;a href=&#34;https://talk.tchncs.de&#34; target=blank rel=nofollow  class=btnTry OpenTalk/a – a href=&#34;https://opentalk.eu/produkt&#34; target=blank rel=nofollow Explore features/a – a href=&#34;https://social.tchncs.de/@milan/110012195431742828&#34; target=blank rel=nofollowDiscuss on Mastodon/a&#xA;/div&#xA;&#xA;Known issues&#xA;Speedtest not yet implemented&#xA;Metrics not yet implemented (affects you in the sense of: to see whether it&#39;s annoying to you if i restart the service)&#xA;Speedtest not supporting latency (not a direct OpenTalk issue, the speedtest-software does not support it yet)&#xA;Protocol PDF export only works from within the pad&#xA;Recordings not yet opensourced, the menuentry will not work&#xA;no proper Email support yet&#xA;phone calls feature not addressed yet, but planned&#xA;&#xA;Login / signup&#xA;&#xA;OpenTalks authentication service (Keycloak) is connected to our authentication server (Zitadel). Until recently, you had to make sure to use the button below the login form, now it is hidden with CSS. Please remember that your accountname ends with @tchncs.de.&#xA; &#xA;About this instance&#xA;Due to the complexity of OpenTalk, this instance derivates off of their official &#34;lite&#34; Docker example. It adds a number of services, trying to reach an as complete as possible experience. As of the time of writing, this is still in progress and a few more restarts are to be expected in order to apply new settings and stuff.&#xA;&#xA;  Maintenance window:&#xA;  because it doesn&#39;t make sense to restart all the time while you are trying to give it a fair test, I will try to apply new settings only between 5-9pm CET. 
(see below)&#xA;&#xA;div class=center&#xA;a href=&#34;https://talk.tchncs.de&#34; target=blank rel=nofollow  class=btnTry OpenTalk/a – a href=&#34;https://opentalk.eu/produkt&#34; target=blank rel=nofollow Explore features/a – a href=&#34;https://social.tchncs.de/@milan/110012195431742828&#34; target=_blank rel=nofollowDiscuss on Mastodon/a&#xA;/div&#xA;&#xA;This instance is categorized as &#34;playground mode&#34;. Its purpose is to evaluate whether it is feasable to keep it long-term. This also means that you still are more than welcome to use and test it, because software that is not used by anybody can&#39;t be tested/evaluated properly.&#xA;&#xA;  Playground mode window:&#xA;  This service will be evaluated until mid of June 2023&#xA;&#xA;About OpenTalk&#xA;&#xA;tba (irrational happy &#34;Rust&#34; noises)&#xA;&#xA;#tchncs #opentalk #playground&#xA;&#xA;hr&#xD;&#xA;How to contact me:  &#xD;&#xA;Follow me on Mastodon / More options on tchncs.de]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://f2.tchncs.de/media_attachments/files/110/011/868/607/663/245/original/5656ca107504e7c8.jpeg" alt="OpenTalk Screenshot"></p>

<div class="center">
<a href="https://talk.tchncs.de" target="_blank" class="btn" rel="nofollow noopener">Try OpenTalk</a> – <a href="https://opentalk.eu/produkt" target="_blank" rel="nofollow noopener">Explore features</a> – <a href="https://social.tchncs.de/@milan/110012195431742828" target="_blank" rel="nofollow noopener">Discuss on Mastodon</a>
</div>

<h2 id="known-issues">Known issues</h2>
<ul><li><del>Speedtest not yet implemented</del></li>
<li><del>Metrics not yet implemented (affects you in the sense of: to see whether it&#39;s annoying to you if i restart the service)</del></li>
<li>Speedtest not supporting latency (not a direct OpenTalk issue, the speedtest-software does not support it yet)</li>
<li><del>Protocol PDF export only works from within the pad</del></li>
<li>Recordings not yet opensourced, the menuentry will not work</li>
<li>no proper Email support yet</li>
<li>phone calls feature not addressed yet, but planned</li></ul>

<h2 id="login-signup">Login / signup</h2>

<p>OpenTalk&#39;s authentication service (Keycloak) is connected to <a href="https://tchncs.de/account" rel="nofollow">our authentication server</a> (Zitadel). Until recently, you had to make sure to use the button below the login form; now it is hidden with CSS. Please remember that your account name ends with <code>@tchncs.de</code>.</p>

<h2 id="about-this-instance">About this instance</h2>

<p>Due to the complexity of <a href="https://opentalk.eu" rel="nofollow">OpenTalk</a>, this instance derives from <a href="https://gitlab.opencode.de/opentalk/ot-setup" rel="nofollow">their official “lite” Docker example</a>. It adds a number of services, trying to deliver as complete an experience as possible. As of the time of writing, this is still in progress, and a few more restarts are to be expected in order to apply new settings.</p>

<blockquote><p><strong>Maintenance window:</strong>
Because it doesn&#39;t make sense to restart all the time while you are trying to give it a fair test, I will try to apply new settings only between <strong>5–9 pm CET</strong>. (see below)</p></blockquote>

<div class="center">
<a href="https://talk.tchncs.de" target="_blank" class="btn" rel="nofollow noopener">Try OpenTalk</a> – <a href="https://opentalk.eu/produkt" target="_blank" rel="nofollow noopener">Explore features</a> – <a href="https://social.tchncs.de/@milan/110012195431742828" target="_blank" rel="nofollow noopener">Discuss on Mastodon</a>
</div>

<p>This instance is categorized as <a href="https://tchncs.de/#playground" rel="nofollow">“playground mode”</a>. Its purpose is to evaluate whether it is feasible to keep it long-term. This also means that you are still more than welcome to use and test it, because software that is not used by anybody can&#39;t be tested/evaluated properly.</p>

<blockquote><p><strong>Playground mode window:</strong>
This service will be evaluated until <strong>mid-June 2023</strong></p></blockquote>

<h2 id="about-opentalk">About OpenTalk</h2>

<p>tba (irrational happy “Rust” noises)</p>

<p><a href="/tchncs/tag:tchncs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">tchncs</span></a> <a href="/tchncs/tag:opentalk" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">opentalk</span></a> <a href="/tchncs/tag:playground" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">playground</span></a></p>

<hr>

<p><strong>How to contact me:</strong><br>
<a href="https://social.tchncs.de/@milan" rel="nofollow"><em>Follow me on Mastodon</em></a> / <a href="https://tchncs.de/contact" rel="nofollow"><em>More options on tchncs.de</em></a></p>
]]></content:encoded>
      <guid>https://text.tchncs.de/tchncs/our-new-opentalk-test-instance</guid>
      <pubDate>Mon, 13 Mar 2023 10:55:33 +0000</pubDate>
    </item>
    <item>
      <title>Our new BookWyrm test-instance</title>
      <link>https://text.tchncs.de/tchncs/our-new-bookwyrm-test-instance</link>
      <description>&lt;![CDATA[Update&#xA;&#xA;The evaluation period has completed. The instance will stay.&#xA;&#xA;hr&#xA;&#xA;Say hi to a new and exciting service at tchncs.de – well – for now. :)&#xA;&#xA;  Due to previous mistakes, I have decided to declare testing-periodes for new services, before they will be added to the portfolio long-term. In this case i went with one month, meaning until April 6th &#39;23.&#xA;&#xA;BookWyrm makes a good first impression and was highly voted for in our new survey. You can track, rate, discuss and share books you are reading. It even is possible to link sources of books. All this while being part of the fediverse like Mastodon!&#xA;&#xA;Sounds great? Wonderful:&#xA;&#xA;div class=center&#xA;a href=&#34;https://tomes.tchncs.de&#34; target=blank rel=nofollow  class=btnExplore BookWyrm/a – a href=&#34;https://joinbookwyrm.com&#34; target=blank rel=nofollow Explore features/a – a href=&#34;https://social.tchncs.de/@milan/109977896011240584&#34; target=_blank rel=nofollowDiscuss on Mastodon/a&#xA;/div&#xA;&#xA;hr&#xA;I have requested an invite but received no email!?&#xA;&#xA;Please give me some time to review requests and send invites. BookWyrm does not send an email until the actual invite. If it takes multiple days, please contact me directly or check your spam. 😇&#xA;&#xA;How final is this setup?&#xA;&#xA;Well it looks fine so far but is fairly fresh and since it is not a simple Docker install, it is possible that there are small mistakes that still need fixing. Note that bookwyrm.social appears to be very slow right now, which causes book imports to fail (if you use this domain as a source. there are more options!).&#xA;&#xA;What if it does not qualify during testing?&#xA;&#xA;In that case, I will give users enough time to look for a new instance and publish an announcement on the instance, as well as on this article. &#xA;&#xA;What if it does qualify?&#xA;&#xA;In that case ... well ... 
it will just continue running and descriptions will be updated accordingly.&#xA;&#xA;#tchncs #bookwyrm #playground&#xA;&#xA;hr&#xD;&#xA;How to contact me:  &#xD;&#xA;Follow me on Mastodon / More options on tchncs.de]]&gt;</description>
      <content:encoded><![CDATA[<h3 id="update">Update</h3>

<p>The evaluation period has completed. The instance will stay.</p>

<hr>

<p>Say hi to a new and exciting service at <a href="https://tchncs.de" rel="nofollow">tchncs.de</a> – well – for now. :)</p>

<blockquote><p>Due to previous mistakes, I have decided to declare testing periods for new services before they are added to the portfolio long-term. In this case I went with <strong>one month</strong>, meaning until April 6th &#39;23.</p></blockquote>

<p><strong>BookWyrm</strong> makes a good first impression and was highly voted for <a href="https://cloud.tchncs.de/apps/forms/s/wyGfLHtWr2RazYKWsRkAj4Zf" rel="nofollow">in our new survey</a>. You can track, rate, discuss and share the books you are reading. It is even possible to link sources for books. All this while being part of the fediverse, like Mastodon!</p>

<p>Sounds great? Wonderful:</p>

<div class="center">
<a href="https://tomes.tchncs.de" target="_blank" class="btn" rel="nofollow noopener">Explore BookWyrm</a> – <a href="https://joinbookwyrm.com" target="_blank" rel="nofollow noopener">Explore features</a> – <a href="https://social.tchncs.de/@milan/109977896011240584" target="_blank" rel="nofollow noopener">Discuss on Mastodon</a>
</div>

<hr>

<h2 id="i-have-requested-an-invite-but-received-no-email">I have requested an invite but received no email!?</h2>

<p>Please give me some time to review requests and send invites. BookWyrm does not send an email until the actual invite. If it takes multiple days, please contact me directly or check your spam. 😇</p>

<h2 id="how-final-is-this-setup">How final is this setup?</h2>

<p>Well, it looks fine so far, but it is fairly fresh, and since it is not a simple Docker install, it is possible that there are small mistakes that still need fixing. Note that bookwyrm.social appears to be very slow right now, which causes book imports to fail (if you use that domain as a source – there are more options!).</p>

<h2 id="what-if-it-does-not-qualify-during-testing">What if it does not qualify during testing?</h2>

<p>In that case, I will give users enough time to look for a new instance and publish an announcement on the instance, as well as on this article.</p>

<h2 id="what-if-it-does-qualify">What if it does qualify?</h2>

<p>In that case ... well ... it will just continue running and descriptions will be updated accordingly.</p>

<p><a href="/tchncs/tag:tchncs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">tchncs</span></a> <a href="/tchncs/tag:bookwyrm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">bookwyrm</span></a> <a href="/tchncs/tag:playground" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">playground</span></a></p>

<hr>

<p><strong>How to contact me:</strong><br>
<a href="https://social.tchncs.de/@milan" rel="nofollow"><em>Follow me on Mastodon</em></a> / <a href="https://tchncs.de/contact" rel="nofollow"><em>More options on tchncs.de</em></a></p>
]]></content:encoded>
      <guid>https://text.tchncs.de/tchncs/our-new-bookwyrm-test-instance</guid>
      <pubDate>Mon, 06 Mar 2023 18:48:30 +0000</pubDate>
    </item>
    <item>
      <title>Maintenance: Peertube Media Migration</title>
      <link>https://text.tchncs.de/tchncs/maintenance-peertube-media-migration</link>
      <description>&lt;![CDATA[Started december 20, 8:30p CET, videos of the tchncs PeerTube instance are moving to a new, more flexible home. At the time of writing, we are at over 5.5 TB of video storage.&#xA;&#xA;Status updates&#xA;Dec 31, 4p:&#xA;The remaining issue is a compatibility problem with permissions set by PeerTube to the storage objects. A few videos are failing to be moved to remote storage (they fail at the last step but files are in fact moved successfully usually). You can play around with resolution to work around playback issues or try to reupload the video if it&#39;s urgent. Here is the bugreport to the issue. I am not sure why some videos work and some don&#39;t.&#xA;Dec 29, 5p:&#xA;A problem with the media proxy-server was identified. As a result, the machine is no longer starving of available bandwidth. This results in smoooother playback and overall better instance snappiness.&#xA;Dec 28, 5p:&#xA;First round done, re-initiated migration to catch and transfer failed videos due to flaky old storage backend &#xA;Dec 26, 10a:&#xA;4.9 TB of 5.5+ TB&#xA;&#xA;hr&#xA;&#xA;Benefits of the new location&#xA;higher availability and overall reliability: the old network storage became unavailable from time to time over the years, sometimes outages, sometimes maintenances.&#xA;scalability: the network storage drive has a maximum amount of storage you can rent. The new storage will not have such a restriction.&#xA;redundancy: the storage can (easier) be replicated to a different location&#xA;&#xA;Challenges / known migration issues&#xA;hidden videos: it appears that PeerTube hides videos that are pending migration&#xA;videos that failed to move: it appears that the network storage became even more unreliable during the migration. This in turn appears to cause video moving to fail from time to time. These videos will remain hidden. 
I will try to reinvoke the migration when the queue of pending videos to move is empty.&#xA;iframe src=&#34;https://social.tchncs.de/@milan/109563131837358117/embed&#34; class=&#34;mastodon-embed&#34; style=&#34;max-width: 100%; border: 0&#34; width=&#34;400&#34; height=&#34;510&#34; allowfullscreen=&#34;allowfullscreen&#34;/iframescript src=&#34;https://social.tchncs.de/embed.js&#34; async=&#34;async&#34;/script&#xA;&#xA;All it takes is patience&#xA;As of right now, there is no reason to worry. Everything is under control, but the process will still take a couple of days. Please be patient. 😇&#xA;&#xA;tchncs&#xA;&#xA;hr&#xD;&#xA;How to contact me:  &#xD;&#xA;Follow me on Mastodon / More options on tchncs.de]]&gt;</description>
      <content:encoded><![CDATA[<p>Started december 20, 8:30p CET, videos of the <a href="https://tube.tchncs.de" rel="nofollow">tchncs PeerTube instance</a> are moving to a new, more flexible home. At the time of writing, we are at over 5.5 TB of video storage.</p>

<h3 id="status-updates">Status updates</h3>
<ul><li><strong>Dec 31, 4p:</strong>
The remaining issue is a compatibility problem with the permissions PeerTube sets on the storage objects. A few videos fail to move to remote storage (they fail at the last step, but the files themselves are usually transferred successfully). You can play around with the resolution to work around playback issues, or try to re-upload the video if it&#39;s urgent. <a href="https://github.com/Chocobozzz/PeerTube/issues/5499" rel="nofollow">Here is the bug report for the issue.</a> I am not sure why some videos work and some don&#39;t.</li>
<li><strong>Dec 29, 5p:</strong>
A problem with the media proxy server was identified. As a result, the machine is no longer starved of bandwidth. This results in smoother playback and overall better instance snappiness.</li>
<li><strong>Dec 28, 5p:</strong>
First round done; re-initiated the migration to catch and transfer videos that failed due to the flaky old storage backend</li>
<li><strong>Dec 26, 10a:</strong>
4.9 TB of 5.5+ TB</li></ul>

<hr>

<h3 id="benefits-of-the-new-location">Benefits of the new location</h3>
<ul><li><strong>higher availability and overall reliability:</strong> the old network storage became unavailable from time to time over the years, sometimes due to outages, sometimes due to maintenance.</li>
<li><strong>scalability:</strong> the network storage drive has a maximum amount of storage you can rent; the new storage will not have such a restriction.</li>
<li><strong>redundancy:</strong> the storage can be replicated to a different location more easily</li></ul>

<h3 id="challenges-known-migration-issues">Challenges / known migration issues</h3>
<ul><li><strong>hidden videos:</strong> it appears that PeerTube hides videos that are pending migration</li>
<li><strong>videos that failed to move:</strong> the network storage appears to have become even more unreliable during the migration, which in turn appears to cause video moves to fail from time to time. These videos will remain hidden. I will try to re-run the migration once the queue of pending moves is empty.
<iframe src="https://social.tchncs.de/@milan/109563131837358117/embed" class="mastodon-embed" style="max-width: 100%; border: 0" width="400" height="510" allowfullscreen="allowfullscreen"></iframe></li></ul>
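<p>Flaky-backend failures like the ones above are commonly handled with a bounded retry pass; a minimal, illustrative sketch (not PeerTube&#39;s actual job runner – the names are made up):</p>

```python
def migrate_with_retries(videos, move, max_attempts=3):
    """Try to move each video to the new storage; collect persistent failures."""
    failed = []
    for video in videos:
        for _ in range(max_attempts):
            try:
                move(video)
                break
            except OSError:
                continue  # flaky backend: try again
        else:
            failed.append(video)  # stays hidden until a later re-run succeeds
    return failed

# Simulated flaky backend: "b" fails once, "c" always fails.
attempts = {}
def flaky_move(video):
    attempts[video] = attempts.get(video, 0) + 1
    if video == "c" or (video == "b" and attempts[video] == 1):
        raise OSError("storage unavailable")

print(migrate_with_retries(["a", "b", "c"], flaky_move))  # ['c']
```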

<h3 id="all-it-takes-is-patience">All it takes is patience</h3>

<p>As of right now, there is no reason to worry. Everything is under control, but the process will still take a couple of days. Please be patient. 😇</p>

<p><a href="/tchncs/tag:tchncs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">tchncs</span></a></p>

<hr>

<p><strong>How to contact me:</strong><br>
<a href="https://social.tchncs.de/@milan" rel="nofollow"><em>Follow me on Mastodon</em></a> / <a href="https://tchncs.de/contact" rel="nofollow"><em>More options on tchncs.de</em></a></p>
]]></content:encoded>
      <guid>https://text.tchncs.de/tchncs/maintenance-peertube-media-migration</guid>
      <pubDate>Tue, 20 Dec 2022 09:24:31 +0000</pubDate>
    </item>
  </channel>
</rss>