From 40d96753323809c8cd0502a36139814d452c3704 Mon Sep 17 00:00:00 2001
From: Azza Ahmed <a.e.ahmed@tudelft.nl>
Date: Tue, 13 May 2025 16:13:35 +0200
Subject: [PATCH] update links across the documentation

---
 content/en/docs/about/_index.md                        |  4 ++--
 content/en/docs/about/contributors-funders.md          |  2 +-
 content/en/docs/manual/best-practices.md               |  6 +++---
 content/en/docs/manual/connecting/index.md             |  2 +-
 content/en/docs/manual/job-submission/kerberos.md      |  2 +-
 content/en/docs/manual/job-submission/priorities.md    | 10 +++++-----
 content/en/docs/manual/job-submission/slurm-basics.md  |  6 +++---
 content/en/docs/manual/software/installing-software.md |  8 ++++----
 content/en/docs/system/compute-nodes.md                |  7 +++++++
 content/en/docs/system/storage.md                      |  2 +-
 content/en/docs/system/tud-clusters.md                 |  8 ++++----
 content/en/quickstart/_index.md                        |  6 +++---
 12 files changed, 35 insertions(+), 28 deletions(-)

diff --git a/content/en/docs/about/_index.md b/content/en/docs/about/_index.md
index 0982ce5..2ed2a35 100644
--- a/content/en/docs/about/_index.md
+++ b/content/en/docs/about/_index.md
@@ -12,9 +12,9 @@ A high-performance computing (HPC) cluster is a collection of interconnected com
 
 ### What is DAIC?
 
-The Delft AI Cluster (DAIC), formerly known as INSY-HPC (or simply “HPC”), is a TU Delft high-performance computing cluster consisting of Linux [compute nodes (i.e., servers)](/docs/system/#compute-nodes) with substantial processing power and memory for running large, long, or GPU-enabled jobs.
+The Delft AI Cluster (DAIC), formerly known as INSY-HPC (or simply “HPC”), is a TU Delft high-performance computing cluster consisting of Linux [compute nodes (i.e., servers)](/docs/system/) with substantial processing power and memory for running large, long, or GPU-enabled jobs.
 
-What started in 2015 as a CS-only cluster has grown to serve researchers across many TU Delft departments. Each expansion has continued to support the needs of computer science and AI research. Today, DAIC nodes are organized into [partitions](/docs/manual/job-submission/partitions/) that correspond to the groups contributing those resources. (See [Contributing departments](/docs/introduction/contributors-funders/#contributing-departments) and [TU Delft clusters comparison](/docs/introduction/tud-clusters/).)
+What started in 2015 as a CS-only cluster has grown to serve researchers across many TU Delft departments. Each expansion has continued to support the needs of computer science and AI research. Today, DAIC nodes are organized into [partitions](/docs/manual/job-submission/priorities/#partitions) that correspond to the groups contributing those resources. (See [Contributing departments](/docs/about/contributors-funders/#contributing-departments) and [TU Delft clusters comparison](/docs/system/tud-clusters/).)
 
 {{< figure src="/img/DAIC_partitions.png" caption="DAIC partitions and access/usage best practices" ref="fig:daic_partitions" width="750px">}}
 
diff --git a/content/en/docs/about/contributors-funders.md b/content/en/docs/about/contributors-funders.md
index 788afed..0927208 100644
--- a/content/en/docs/about/contributors-funders.md
+++ b/content/en/docs/about/contributors-funders.md
@@ -87,7 +87,7 @@ The cluster is available only to users from participating departments. Access is
 
 
 {{% alert title="Note" color="primary" %}}
-To check the corresponding nodes or servers for each department, see the [Cluster Specification](/docs/system#compute-nodes) page.
+To check the corresponding nodes or servers for each department, see the [Cluster Specification](/docs/system) page.
 {{% /alert %}}
 
 
diff --git a/content/en/docs/manual/best-practices.md b/content/en/docs/manual/best-practices.md
index 34e4e78..0a5be92 100644
--- a/content/en/docs/manual/best-practices.md
+++ b/content/en/docs/manual/best-practices.md
@@ -9,9 +9,9 @@ description: >
 The available processing power and memory in DAIC is large, but still limited. You should use the available resources efficiently and fairly. This page lays out a few general principles and guidelines  for considerate use of DAIC.
 
 ## Using shared resources
-The [computing nodes](docs/system/#compute-nodes) within DAIC are primarily meant to run large, long (non-interactive) jobs. You share these resources with other users across departments. Thus, you need to be cautious of your usage so you do not hinder other users. 
+The [computing nodes](/docs/system/compute-nodes) within DAIC are primarily meant to run large, long (non-interactive) jobs. You share these resources with other users across departments. Thus, you need to be mindful of your usage so you do not hinder other users. 
 
-To help protect the active jobs and resources, when a [login node](docs/system/#login-nodes) becomes overloaded, new logins to this node are automatically disabled. 
+To help protect the active jobs and resources, when a [login node](/docs/system/login-nodes) becomes overloaded, new logins to this node are automatically disabled. 
 This means that you will sometimes have to wait for other jobs to finish and at other times ICT may have to kill a job to create space for other users.
 
 {{% pageinfo %}}
@@ -23,7 +23,7 @@ This means that you will sometimes have to wait for other jobs to finish and at
 ### Best practices
 * Always choose the login node with the lowest use (most importantly system load and memory usage), by checking the {{< external-link "https://login.daic.tudelft.nl/" "Current resource usage page" >}} or the `servers` command for information.
   * Each login node displays a message at login. Make sure you understand it before proceeding. This message includes the current load of the node, so look at it at every login
-* Only use the storage best suited to your files (See [Storage](/docs/system#storage)).
+* Only use the storage best suited to your files (See [Storage](/docs/system/storage)).
 
 <!--
 * ~~Automate your job.~~
diff --git a/content/en/docs/manual/connecting/index.md b/content/en/docs/manual/connecting/index.md
index 69d61fd..444d7fe 100644
--- a/content/en/docs/manual/connecting/index.md
+++ b/content/en/docs/manual/connecting/index.md
@@ -30,7 +30,7 @@ $ ssh login.daic.tudelft.nl             # If your username matches your NetID
 This will log you in into DAIC's `login1.daic.tudelft.nl` node for now. Note that this setup might change in the future as the system undergoes migration, potentially reducing the number of login nodes..
 
 {{% alert title="Note" color="info"  %}}
-Currently DAIC has 3 login nodes: `login1.daic.tudelft.nl`, `login2.daic.tudelft.nl`, and `login3.daic.tudelft.nl`. You can connect to any of these nodes directly as per your needs.  For more on the choice of login nodes, see [DAIC login nodes](/docs/system#login-nodes).
+Currently DAIC has 3 login nodes: `login1.daic.tudelft.nl`, `login2.daic.tudelft.nl`, and `login3.daic.tudelft.nl`. You can connect to any of these nodes directly as per your needs.  For more on the choice of login nodes, see [DAIC login nodes](/docs/system/login-nodes/).
 {{% /alert %}}
 
 
diff --git a/content/en/docs/manual/job-submission/kerberos.md b/content/en/docs/manual/job-submission/kerberos.md
index 75db066..d8c5b21 100644
--- a/content/en/docs/manual/job-submission/kerberos.md
+++ b/content/en/docs/manual/job-submission/kerberos.md
@@ -10,7 +10,15 @@ description: >
 Kerberos is an authentication protocol which uses tickets to authenticate users (and computers). You automatically get a ticket when you log in with your password on a TU Delft installed computer. You can use this ticket to authenticate yourself without password when connecting to other computers or accessing your files. To protect you from misuse, the ticket expires after 10 hours or less (even when you're still logged in).
 
 ### File access
-Your Linux and Windows [Home](/docs/system#personal-storage-aka-home-folder) directories and the [Group](/docs/system#group-storage) and [Project](/docs/system#project-storage) shares are located on network fileservers, which allows you to access your files from all TU Delft installed computers. Kerberos authentication is used to enable access to, or protect, your files. Without a valid Kerberos ticket (e.g. when the ticket has expired) you will not be able to access your files but instead you will receive a `Permission denied` error.
+Your Linux and Windows [Home](/docs/system/storage/#personal-storage-aka-home-folder) directories and the [Group](/docs/system/storage/#group-storage) and [Project](/docs/system/storage/#project-storage) shares are located on network fileservers, which allows you to access your files from all TU Delft installed computers. Kerberos authentication is used to enable access to, or protect, your files. Without a valid Kerberos ticket (e.g. when the ticket has expired) you will not be able to access your files but instead you will receive a `Permission denied` error.
 
 ### Lifetime of Kerberos Tickets
 Kerberos tickets have a limited valid lifetime (of up to 10 hours) to reduce the risk of abuse, even when you stay logged in. If your tickets expire, you will receive a `Permission Denied` error when you try to access your files and a password prompt when you try to connect to another computer. When you want your program to be able to access your files for longer than the valid ticket lifetime, you'll have to renew your ticket (repeatedly) until your program is done. Kerberos tickets can be renewed up to a maximum renewable life period of 7 days (again to reduce the risk of abuse).
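+
+A minimal sketch for checking and renewing a ticket from the command line (assuming the standard Kerberos client tools `klist` and `kinit` are available on the machine):
+
+```bash
+$ klist      # show your current tickets with their expiry and renewal deadlines
+$ kinit -R   # renew an existing (renewable) ticket before it expires
+$ kinit      # request a fresh ticket; prompts for your password
+```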
diff --git a/content/en/docs/manual/job-submission/priorities.md b/content/en/docs/manual/job-submission/priorities.md
index 148a00b..b07f484 100644
--- a/content/en/docs/manual/job-submission/priorities.md
+++ b/content/en/docs/manual/job-submission/priorities.md
@@ -19,13 +19,13 @@ When slurm is not configured for FIFO scheduling, jobs are prioritized in the fo
 
 In SLURM, a partition is a scheduling construct that groups nodes or resources based on certain characteristics or policies. Partitions are used to organize and manage resources within a cluster, and they allow system administrators to control how jobs are allocated and executed on different nodes. 
 
-To see all paritions on DAIC, you can use the command `scontrol show parition -a`. To check owners of these partitions, check the [Contributing departments](/docs/introduction/contributors-funders/#contributing-departments) page.
+To see all partitions on DAIC, you can use the command `scontrol show partition -a`. To see the owners of these partitions, check the [Contributing departments](/docs/about/contributors-funders/#contributing-departments) page.
 
 ### Partitions & priority tiers
 DAIC partitions are tiered: 
 - The `general` partition is in the _lowest priority tier_, 
 - Department partitions (eg, `insy`, `st`) are in the _middle priority tier_, and 
-- Partitions for specific groups (eg, `influence`, `mmll`) are in the _highest priority tier_. Those partitions correspond to resources contributed by the respective groups or departments (see [Contributing departments](/docs/introduction)).
+- Partitions for specific groups (eg, `influence`, `mmll`) are in the _highest priority tier_. Those partitions correspond to resources contributed by the respective groups or departments (see [Contributing departments](/docs/about/contributors-funders/#contributing-departments)).
 
 When resources become available, the scheduler will first look for jobs in the highest priority partition that those resources are in, and start the highest (user) priority jobs that fit within the resources (if any). When resources remain, the scheduler will check the next lower priority tier, and so on. Finally, the scheduler will try to _backfill_ lower (user) priority jobs that fit (if any).
 
@@ -130,7 +130,7 @@ More details is available in [Slurm's SchedulerType](https://slurm.schedmd.com/s
   - QOS: the quality of service associated with the job, which is specified with the slurm `--qos` directive  (see [QoS priority](#qos-priority)).
 
 {{% alert title="Info" color="info" %}}
-The whole idea behind the FairShare scheduling in DAIC is to share all the available resources fairly and efficiently with all users (instead of having strict limitations in the amount of resource use or in which hardware users can compute). The resources in the cluster are contributed in different amounts by different groups (see [Contributing departments](/docs/introduction)), and the scheduler makes sure that each group can use a _share_ of the resource relative to what the group contributed. 
+The whole idea behind FairShare scheduling in DAIC is to share all available resources fairly and efficiently among all users (instead of imposing strict limits on how much of the resources, or which hardware, each user may use). The resources in the cluster are contributed in different amounts by different groups (see [Contributing departments](/docs/about/contributors-funders/#contributing-departments)), and the scheduler makes sure that each group can use a _share_ of the resources relative to what the group contributed. 
 To check how the cluster is configured you may run:
 
 ```bash
@@ -182,7 +182,14 @@ PriorityWeightTRES      = (null)
 
 When you submit a job in a slurm-based system, it enters a queue waiting for resources.
 The _partition_ and _Quality of Service(QoS)_ are the two job parameters slurm uses to assign resources for a job:
-* The _partition_  is a set of compute nodes on which a job can be scheduled. In DAIC, the nodes contributed or funded by a certain group are lumped into a corresponding partition (see [Contributing departments](/docs/introduction#contributing-departments)). 
+* The _partition_  is a set of compute nodes on which a job can be scheduled. In DAIC, the nodes contributed or funded by a certain group are lumped into a corresponding partition (see [Contributing departments](/docs/about/contributors-funders/#contributing-departments)). 
 All nodes in DAIC are part of the `general` partition, but other partitions exist for prioritization purposes on select nodes (see [Priority tiers](/docs/manual/job-submission/priorities)).
 * The _Quality of Service_ is a set of limits that controls what resources a job can use and, therefore, determines the priority level of a job. This includes the run time, CPU, GPU and memory limits on the given partition. Jobs that exceed these limits are automatically terminated (see [QoS priority](/docs/manual/job-submission/priorities#qos-priority)).
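+
+A minimal sketch of setting both in a job script header (the `general` partition exists cluster-wide; the QoS name `short` is only a placeholder, so use a QoS you are entitled to):
+
+```bash
+#SBATCH --partition=general   # the set of nodes the job may be scheduled on
+#SBATCH --qos=short           # placeholder QoS; sets run time, CPU, GPU and memory limits
+```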
 
@@ -405,7 +405,14 @@ Using reservations is in line with the [General cluster usage clauses](/docs/pol
 To request a reservation for nodes, please use to the [Request Reservation form](https://tudelft.topdesk.net/tas/public/ssp/content/detail/service?unid=c6d0e44564b946eaa049898ffd4e6938&from=d75e860b-7825-4711-8225-8754895b3507). You can request a reservation for an entire compute node (or a group of nodes)  **if you have contributed this (or these) nodes to the cluster and you have special needs that needs to be accommodated**.
 
 General guidelines for reservations' requests:
-* You can be granted a reservation *only* on nodes from a partition that is contributed by your group (See [Partitions](/docs/manual/job-submission/partitions) to check the name of the partition contributed by your group, and [System specifications](/docs/system/) for a listing of available nodes and their features).
+* You can be granted a reservation *only* on nodes from a partition that is contributed by your group (See [Computing nodes](/docs/system/compute-nodes) for a listing of available nodes, their features, and which partitions they belong to).
 * Please ask for the least amount of resources you need as to minimize impact on other users.
 * _Plan ahead and request your reservation as soon as possible_: Reservations usually ignore running jobs, so any running job on the machine(s) you request will continue to run when the reservation starts. While jobs from other users will not start on the reserved node(s), the resources in use by an already running job at the start time of the reservation will not be available in the reservation until this running job ends. The earlier ahead you request resources, the easier it is to allocate the requested resources.
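+
+Once a reservation has been granted, a minimal sketch of using it (the reservation name `my_reservation` below is a placeholder; use the name you were given):
+
+```bash
+$ scontrol show reservation                        # list active reservations and their names
+$ sbatch --reservation=my_reservation job.sbatch   # submit a job into your reservation
+```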
 
diff --git a/content/en/docs/manual/job-submission/slurm-basics.md b/content/en/docs/manual/job-submission/slurm-basics.md
index 8e5890f..0f3a910 100644
--- a/content/en/docs/manual/job-submission/slurm-basics.md
+++ b/content/en/docs/manual/job-submission/slurm-basics.md
@@ -8,7 +8,7 @@ description: >
 
 ## Job script
 
-Job scripts are text files, where the header set of directives that specify compute resources, and the remainder is the code that needs to run. All resources and scheduling are specified in the header as `#SBATCH` directives (see `man sbatch` for more information). Code could be a set of steps to run in series, or parallel tasks within these steps (see [Slurm job's terminology](/docs/manual/job-submission)).
+Job scripts are text files in which the header is a set of directives that specify compute resources, and the remainder is the code that needs to run. All resources and scheduling are specified in the header as `#SBATCH` directives (see `man sbatch` for more information). The code can be a set of steps to run in series, or parallel tasks within these steps (see [Slurm job's terminology](/docs/manual/job-submission/slurm-basics)).
 
 The code snippet below is a template script that can be customized to run jobs on DAIC. 
 A useful tool that can be used to streamline the debugging of such scripts is {{< external-link "https://www.shellcheck.net/" "ShellCheck" >}}.
@@ -60,7 +60,7 @@ Submitted batch job 2
 
 ### Using GPU resources
 
-Some DAIC nodes have GPUs of different types, that can be used for various compute purposes (see [GPUs](/docs/system#gpus)).
+Some DAIC nodes have GPUs of different types, which can be used for various compute purposes (see [GPUs](/docs/system/compute-nodes/#gpus)).
 
 
 To request a gpu for a job, use the sbatch directive `--gres=gpu[:type][:number]`, where the optional `[:type]` and `[:number]` specify the type and number of the GPUs requested, as in the examples below:
@@ -178,7 +178,7 @@ SomeNetID@influ1:~$ exit
 
 ## Interactive jobs on compute nodes
 
-To work interactively on a node, e.g., to debug a running code, or test on a GPU, start an interactive session using `sinteractve <compute requirements>`. If no parameters were provided, the default are applied. `<compute requirement>` can be specified the same way as sbatch directives within an sbatch script (see [Submitting jobs](/docs/manual/job-submission/job-scripts)), as in the examples below:
+To work interactively on a node, e.g., to debug running code or test on a GPU, start an interactive session using `sinteractive <compute requirements>`. If no parameters are provided, the defaults are applied. `<compute requirements>` can be specified the same way as sbatch directives within an sbatch script (see [Submitting jobs](/docs/manual/job-submission/slurm-basics/#job-script)), as in the examples below:
 
 ```bash
 $ hostname # check you are in one of the login nodes
diff --git a/content/en/docs/manual/software/installing-software.md b/content/en/docs/manual/software/installing-software.md
index cba0159..3e301c4 100644
--- a/content/en/docs/manual/software/installing-software.md
+++ b/content/en/docs/manual/software/installing-software.md
@@ -8,11 +8,19 @@ description: >
 
 ## Basic principles
 
-- On a cluster, it's important that software is available and identical on all nodes, both _login_ and _compute_ nodes (see [Workload scheduler](/docs/system#workload-scheduler)). For self-installed software, it's easier to install the software in one shared location than installing and maintaining the same software separately on every single node. You should therefore install your software on one of the network shares (e.g., your `$HOME` folder or an `umbrella` or `bulk` folder) that are accessible from all nodes (see [Storage](/docs/system#storage)).
+- On a cluster, it's important that software is available and identical on all nodes, both _login_ and _compute_ nodes (see [Workload scheduler](/docs/system/scheduler/)). For self-installed software, it's easier to install the software in one shared location than installing and maintaining the same software separately on every single node. You should therefore install your software on one of the network shares (e.g., your `$HOME` folder or an `umbrella` or `bulk` folder) that are accessible from all nodes (see [Storage](/docs/system/storage)).
 
 - As a regular Linux user you don't have administrator rights. Yet, you can do your normal work, including installing software _in a personal folder_, without needing administrator rights. Consequently, you don't need (nor are you allowed) to use the `sudo` or `su` commands that are often shown in manuals. 
 
-- Like other clusters, DAIC has a set quota on `$HOME` directories (see [system specifications](/docs/system#storage) for current limits). It means that installing software in your `$HOME` directory is limited. If you need more space, you should use a project share (see [Storage](/docs/system#storage)).
+- Like other clusters, DAIC has a set quota on `$HOME` directories (see [Checking Quota Limits](/docs/system/storage/#checking-quota-limits)). This means that the space for installing software in your `$HOME` directory is limited. If you need more space, you should use a project share (see [Storage](/docs/system/storage)).
 
 - Both group storage (under `/tudelft.net/staff-groups/` or `/tudelft.net/staff-bulk/`) and project storage (under `/tudelft.net/staff-umbrella/`) are Windows-based, leading to problems installing packages with tools like `pip` due to file permission errors. Therefore, the recommended way of using your own software and environments is to use containerization and to store your containers under `/tudelft.net/staff-umbrella/...`. Check out the [Apptainer tutorial](/tutorials/apptainer) for guidance.
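+
+A minimal sketch of that container workflow (the image name and project path below are placeholders; see the tutorial for the full workflow):
+
+```bash
+$ cd /tudelft.net/staff-umbrella/YourProject        # placeholder project share
+$ apptainer pull python.sif docker://python:3.12    # build a SIF container image from Docker Hub
+$ apptainer exec python.sif python3 --version       # run a command inside the container
+```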
 
@@ -259,7 +259,7 @@ $ find miniforge3 -type f | wc -l
 Now, you can install your own versions of libraries and programs, or create entire environments as descibed above.
 
 {{% alert title="Stop!" color="warning" %}}
-You are limited to a fixed quota in your `$HOME` directory (see [system specifications](/docs/system/#personal-storage-aka-home-folder)). Installing a full development environment (e.g. for PyTorch) can easily exceed this quota. Therefore, it is recommended to install only essential tools and libraries in your `$HOME` directory. For larger environments, consider installing them in a [project](/docs/system/#project-storage) (preferred) or [group](/docs/system/#group-storage) share.
+You are limited to a fixed quota in your `$HOME` directory (see [Personal Storage](/docs/system/storage/#personal-storage-aka-home-folder)). Installing a full development environment (e.g. for PyTorch) can easily exceed this quota. Therefore, it is recommended to install only essential tools and libraries in your `$HOME` directory. For larger environments, consider installing them in a [project](/docs/system/storage/#project-storage) (preferred) or [group](/docs/system/storage/#group-storage) share.
 {{% /alert %}}
 
 ## Using binaries
@@ -320,7 +320,7 @@ $ source ~/.bash_profile
 $ mkdir -p "$PREFIX"
 ```
 
-The line `export PREFIX="$HOME/.local"` sets your software installation directory to `/home/nfs/<YourNetID>/.local` (which is the default and accessible on all nodes). This is in your personal home directory, which has a fixed quota (see [system specifications](/docs/system/#personal-storage-aka-home-folder)). For software intended to be shared with others, you should instead use a [project](/docs/system/#project-storage) (preferred) or [group](/docs/system/#group-storage) share.
+The line `export PREFIX="$HOME/.local"` sets your software installation directory to `/home/nfs/<YourNetID>/.local` (which is the default and accessible on all nodes). This is in your personal home directory, which has a fixed quota (see [Personal storage](/docs/system/storage/#personal-storage-aka-home-folder)). For software intended to be shared with others, you should instead use a [project](/docs/system/storage/#project-storage) (preferred) or [group](/docs/system/storage/#group-storage) share.
 
 
 ```bash
diff --git a/content/en/docs/system/compute-nodes.md b/content/en/docs/system/compute-nodes.md
index 370aa22..ab2d63b 100644
--- a/content/en/docs/system/compute-nodes.md
+++ b/content/en/docs/system/compute-nodes.md
@@ -138,6 +138,13 @@ All compute nodes support [Advanced Vector Extensions](https://en.wikipedia.org/
 The following table gives an overview of current nodes and their characteristics. Use the search bar to filter by hostname, GPU type, or any other column, and select columns to be visible. 
 <!-- The "Controller" column refers to the onboard network controller. -->
 
+{{% alert title="Note" color="info" %}}
+Slurm partitions typically correspond to research groups or departments that have contributed compute resources to the cluster. Most partition names follow the format `<faculty>-<department>` or `<faculty>-<department>-<section>`. A few exceptions exist for project-specific nodes.
+
+For more information, see the [Partitions](/docs/manual/job-submission/priorities/#partitions) section.
+{{% /alert %}}
+
+
 <table id="nodes-table" class="display">
 <thead>
   <tr>
diff --git a/content/en/docs/system/storage.md b/content/en/docs/system/storage.md
index cdf6fed..f9e977d 100644
--- a/content/en/docs/system/storage.md
+++ b/content/en/docs/system/storage.md
@@ -7,7 +7,7 @@ description: >
 
 ## Storage
 {{% pageinfo %}}
-DAIC compute nodes have direct access to the TU Delft [home](#personal-storage-aka-home-folder), [group](#group-storage) and [project](#project-storage) storage. You can use your TU Delft installed machine or an SCP or SFTP client to transfer files to and from these storage areas and others (see [data transfer](/docs/manual/data-management/data-transfer/)) , as is demonstrated throughout this page.
+DAIC compute nodes have direct access to the TU Delft [home](#personal-storage-aka-home-folder), [group](#group-storage) and [project](#project-storage) storage. You can use your TU Delft installed machine or an SCP or SFTP client to transfer files to and from these storage areas and others (see [data transfer](/docs/manual/data-management/)), as is demonstrated throughout this page.
 {{% /pageinfo %}}
 
 ### File System Overview
diff --git a/content/en/docs/system/tud-clusters.md b/content/en/docs/system/tud-clusters.md
index 54e7140..338dda3 100644
--- a/content/en/docs/system/tud-clusters.md
+++ b/content/en/docs/system/tud-clusters.md
@@ -27,7 +27,7 @@ DAIC is one of several clusters accessible to TU Delft CS researchers (and their
   </tr>
   <tr>
     <td>Contributors</td>
-    <td>Certain groups within TU Delft (see <a href="#contributing-departments">Contributing departments</a>)</td>
+    <td>Certain groups within TU Delft (see <a href="/docs/about/contributors-funders/#contributing-departments">Contributing departments</a>)</td>
     <td>All TU Delft faculties</td>
     <td>Multiple universities &amp; SURF</td>
   </tr>
@@ -51,7 +51,7 @@ DAIC is one of several clusters accessible to TU Delft CS researchers (and their
   </tr>
   <tr>
     <td>Website</td>
-    <td><a href="https://doc.daic.tudelft.nl/">DAIC documentation</a></td>
+    <td><a href="https://daic.tudelft.nl/">DAIC documentation</a></td>
     <td><a href="https://doc.dhpc.tudelft.nl/delftblue/">DelftBlue Documentation</a></td>
     <td><a href="https://asci.tudelft.nl/project-das/">DAS Documentation</a></td>
   </tr>
@@ -69,7 +69,7 @@ DAIC is one of several clusters accessible to TU Delft CS researchers (and their
   </tr>
   <tr>
     <td>Getting started</td>
-    <td><a href="/tutorials/quickstart/">Quickstart</a></td>
+    <td><a href="/quickstart/">Quickstart</a></td>
     <td><a href="https://doc.dhpc.tudelft.nl/delftblue/crash-course/">Crash course</a></td>
     <td></td>
   </tr>
@@ -92,7 +92,7 @@ DAIC is one of several clusters accessible to TU Delft CS researchers (and their
   </tr>
   <tr>
     <td>Data storage</td>
-    <td><a href="/docs/system#storage">Storage</a></td>
+    <td><a href="/docs/system/storage">Storage</a></td>
     <td><a href="https://doc.dhpc.tudelft.nl/delftblue/DHPC-hardware/#storage">Storage</a></td>
     <td>Storage: 128 TB (RAID6) </td>
   </tr>
diff --git a/content/en/quickstart/_index.md b/content/en/quickstart/_index.md
index c05c2cb..43e1c2e 100644
--- a/content/en/quickstart/_index.md
+++ b/content/en/quickstart/_index.md
@@ -59,9 +59,9 @@ flowchart TB
             E --> F --> H 
             
             click C "/docs/manual/software/" "Software setup"
-            click D "/docs/manual/data-management/data-transfer" "Data transfer methods"
-            click E "/docs/manual/job-submission/job-interactive" "Interactive jobs on compute nodes"
-            click F "/docs/manual/job-submission/job-scripts" "Job submission"
+            click D "/docs/manual/data-management/" "Data transfer methods"
+            click E "/docs/manual/job-submission/slurm-basics/#interactive-jobs-on-compute-nodes" "Interactive jobs on compute nodes"
+            click F "/docs/manual/job-submission/slurm-basics/#job-script" "Job submission"
             click H "/support/faqs/job-resources#how-do-i-clean-up-tmp-when-a-job-fails" "How do I clean up tmp?"
         end
         subgraph  local["Develop locally, then port code"]
-- 
GitLab