# jupyterhub

**Repository Path**: aifaith/jupyterhub

## Basic Information

- **Project Name**: jupyterhub
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 2
- **Created**: 2022-02-06
- **Last Updated**: 2025-02-09

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

Add students:

```
sudo /opt/bin/add_student.py temp.txt
```

or

1. switch to the grader account
2. `sudo ./add_student.py temp.txt`

Fix bugs for admin:

```
python3 ~/opt/bin/add_nbuser.py ~/opt/userdata/lecturers.list
```

Admin (JupyterHub config):

```
c.Authenticator.admin_users = {'grader'}
```

Clear the cache and rebuild:

```bash
docker-compose build --no-cache && docker-compose up
```

Check that the accounts exist:

```
cd /home
ls              # student and lecturer home directories are listed
cat /etc/shadow # lecturer and student users appear on the left
```

Add a single student to the nbgrader database:

```
nbgrader db student add 1730026058 --last-name=11 --first-name=11 --email=111@mail.uic.edu.hk
```

# restart docker

```bash
docker-compose rm -f && docker volume rm jupyterhub_mydata && docker-compose up --build
```

# solution in Nbgrader config

```
c.ClearSolutions.code_stub = {
    "python": "# your code here\n"
}
```

Shut the hub down and start it again for this to take effect.

# Push docker

Ubuntu 16.04+, Debian 8+, CentOS 7: on systems that use systemd, write the following into /etc/docker/daemon.json (create the file if it does not exist):

```
{"registry-mirrors": ["https://reg-mirror.qiniu.com/"]}
```

Then restart the service:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```

# install docker and docker-compose

1. sudo apt install docker.io make -y
2. Install docker-compose from the GitHub release, or from the Gitee mirror:

   sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

   sudo curl -L https://gitee.com/aifaith/images/raw/master/docker/docker-compose -o /usr/local/bin/docker-compose

   https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
3. git clone https://gitee.com/asdczxwqedas/jupyterhub.git

# grading

https://nbgrader.readthedocs.io/en/stable/user_guide/creating_and_grading_assignments.html#autograde-assignments

```
su - grader
nbgrader collect "ps1"
nbgrader autograde "ps1" --force
nbgrader generate_feedback "ps1" && nbgrader release_feedback "ps1"
nbgrader export
```

```
su - grader
nbgrader collect "Week1-Quiz0-1005"
nbgrader autograde "Week1-Quiz0-1005" --force
nbgrader generate_feedback "Week1-Quiz0-1005"
nbgrader release_feedback "Week1-Quiz0-1005"
```

# backup

Run as root.

## backup course

```
cp -r /home/grader/python2022 /mnt
cp -r /mnt/python2022 /home/grader
chown -R grader:users /home/grader/python2022
```

```
tar -czvf python2022.tar.gz /home/grader/python2022
tar -xzvf python2022.tar.gz
```

## backup exchange

```
cp -r /srv/nbgrader/exchange /mnt
cp -r /mnt/exchange /srv/nbgrader
chown -R grader:users /srv/nbgrader/exchange
chmod 777 /srv/nbgrader/exchange
```
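The copy, archive, and ownership steps above can be wrapped into one small script. A minimal sketch, run as root, assuming the same course name and paths as in this section; the script itself and the date-stamped archive names are my own convention, not part of the repo:

```bash
#!/bin/bash
# Back up the course directory and the nbgrader exchange directory into /mnt.
# Paths and course name follow the backup section above; the date-stamped
# file names are only an example.
set -e
COURSE=python2022
STAMP=$(date +%Y%m%d)

tar -czvf "/mnt/${COURSE}-${STAMP}.tar.gz" "/home/grader/${COURSE}"
tar -czvf "/mnt/exchange-${STAMP}.tar.gz" /srv/nbgrader/exchange
```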
https://docs.oracle.com/cd/E24457_01/html/E21988/giprn.html

Become a superuser (root) by typing:

```
% su
Password: root-password
```

Create a file in a selected directory to add swap space by typing:

```
dd if=/dev/zero of=/dir/myswapfile bs=1024 count=number_blocks_needed
```

where `dir` is a directory in which you have permission to add swap space, `myswapfile` is the name of the swap file you are creating, and `number_blocks_needed` is the number of 1024-byte blocks you want to create. See the dd(1) man page for more information.

Verify that the file was created by typing:

```
ls -l /dir/myswapfile
```

The new file appears in the directory.

Initialize the new swap area by typing:

```
mkswap /dir/myswapfile
```

See the mkswap(8) man page for more detailed information.

Run the swapon command to enable the new swap space for paging and swapping:

```
swapon /dir/myswapfile
```

Verify that the extra swap space was added by typing:

```
swapon -s
```

The output shows the allocated swap space.

Concretely:

1. dd if=/dev/zero of=/swapfile bs=1024 count=10097152
2. chmod 600 /swapfile
3. mkswap /swapfile
4. swapon /swapfile
5. swapon -s
6. htop

Share a docker image locally:

1. docker save -o jupyter_grader xianzixiang/jupyter_grader:2.0.2
2. docker load -i jupyter_grader

Install the Linux NVIDIA driver:

```
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-510 nvidia-dkms-510 nvidia-utils-510
```

Docker CUDA:

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker

## Migrate

https://stackoverflow.com/questions/28734086/how-to-move-docker-containers-between-different-hosts

```
1. docker commit jupyterhub newhub
2. docker save -o newhub.tar newhub
3. docker load -i newhub.tar
4. docker create --name $CONTAINER [] newhub
5. docker start $CONTAINER

docker run -itd -p 80:80 --name jupyterhub newhub
```

## Persistence

1. Create a data volume container (any base image will do, for example `ubuntu`).

   ```
   docker create -v /data --name mydata ubuntu /bin/true
   ```

   The command above uses `docker create` to create a container named `mydata` containing a data volume `/data`; the container uses the `ubuntu` image and runs the `true` command (that is, it does nothing).

2. Mount the data volume in other containers with the `--volumes-from` flag. For example, the following command starts an `nginx` container and mounts the `/data` volume from the `mydata` container:

   ```
   docker run -d --name mynginx --volumes-from mydata nginx
   ```

   This mounts the `/data` volume of the `mydata` container into the `mynginx` container.

Now the `/data` volume in the `mydata` container is shared with the `mynginx` container; even after `mynginx` is removed, the volume is still held by `mydata`, which is what gives the data its persistence.

Note that the data volume container was created with `/bin/true` because it only needs to provide the mount point and does not have to do anything. You could instead run a command that keeps running (such as `tail -f /dev/null` or `sleep 9999`), or an empty command as above.
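A quick way to convince yourself that the volume outlives the containers that use it; the file name `/data/test.txt` and the echoed text are just examples, not part of the setup above:

```bash
# Write a file into the shared volume through a throwaway container.
docker run --rm --volumes-from mydata ubuntu \
    sh -c 'echo "hello from mydata" > /data/test.txt'

# Remove the nginx container that was using the volume ...
docker rm -f mynginx

# ... then read the file back through a fresh container: the data is still there.
docker run --rm --volumes-from mydata ubuntu cat /data/test.txt
```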