For fresh install:

NOTE: on old system, get list of packages installed w/o version
numbers with
$ rpm -qa --queryformat='%{NAME}\n'|sort
After install, run same command on new system and diff to see what
optional packages need to be installed.
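The old-vs-new comparison above can be scripted; a minimal sketch using comm (the helper name missing_packages is mine, not an existing tool):

```shell
# Sketch: compare package lists saved from the old and new systems.
# Each list file is assumed to have been produced with:
#   rpm -qa --queryformat='%{NAME}\n' | sort > FILE
# missing_packages prints names present in the old list but not the new one.
missing_packages() {
    # comm -23 keeps lines unique to the first (sorted) file
    comm -23 "$1" "$2"
}
```

The output is a candidate list for dnf install, e.g. missing_packages old.txt new.txt | xargs dnf install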
==========
Note on Dell workstations:
The Fedora installer doesn't support RAID (Intel Rapid Response Technology) or Intel Optane Memory.
In the BIOS, expand System Configuration and go to SATA Operation. The storage controller must be set to AHCI.
See https://dellwindowsreinstallationguide.com/fedora-34/
==========

Install updates:
dnf -y update

==========

If not done during installation, set fully qualified hostname in
/etc/hostname.

==========
For PDFs, consider using Evince. Otherwise install extra dnf repositories for Adobe Reader, which is possibly insecure. It used to be possible to install Adobe Reader using YUM/DNF, but there is currently no Adobe Reader in the 32-bit repo (no longer supported on Linux), so here is an updated installation guide to get Adobe Reader working: https://www.if-not-true-then-false.com/2010/install-adobe-acrobat-pdf-reader-on-fedora-centos-red-hat-rhel/
cd /tmp
wget http://ardownload.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i486linux_enu.rpm
rpm -Uvh --nodeps AdbeRdr9.5.5-1_i486linux_enu.rpm
wget -O /opt/Adobe/Reader9/Reader/intellinux/lib/libidn.so.11 https://www.if-not-true-then-false.com/dl/libidn.so.11.6.18
dnf install libcanberra-gtk2.i686 adwaita-gtk2-theme.i686 PackageKit-gtk3-module.i686

Get rpmfusion repo rpms from
http://rpmfusion.org/Configuration
dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm 

For Google Chrome:
dnf install fedora-workstation-repositories
dnf config-manager --set-enabled google-chrome
dnf install google-chrome-stable
==========
Install Icingaweb2 client and the nagios plugins:
curl https://packages.icinga.com/fedora/ICINGA-release.repo -o /etc/yum.repos.d/ICINGA-release.repo
dnf install icinga2 nagios-plugins-all

Run the install wizard:
icinga2 node wizard

Enable and restart the service:
systemctl enable --now icinga2

==========
Here are the most common package requests received; the list can be pared down.
dnf install logwatch autofs tcsh emacs yp-tools nfs-utils aspell-en gnome-tweak-tool enscript a2ps ddd cups gcc-c++ libreoffice opencv bwa vim dialog libnsl system-config-printer sipcalc python3-seaborn python3-lxml python3-basemap python3-scikit-image python3-scikit-learn python3-sympy python3-dask+dataframe python3-nltk valgrind python3-elpy htop iotop ncurses-devel conntrack-tools pdfmod pdfshuffler

May also need glibc.i686 (for 32-bit application compatibility)

Install icedtea-web for java plugin for browsers: dnf install icedtea-web

Install the following emacs-related RPMs (and their dependencies): 
    emacs-auctex     (this will install texlive for dependencies)
    emacs-auctex-doc 
    emacs-vm
    emacs-w3m
    emacs-el
    emacs-rinari (for Ruby on Rails)
Note: if emacs opens with a small/short window, you can compile emacs with GTK2. Here is a how-to; remember to replace your username.
dnf install rpmdevtools
Logged in as your username, do the following:
$ dnf download --source emacs
$ rpmdev-setuptree
$ rpm -ivh emacs-nn.rc1.fcnn.src.rpm    (replace nn with the version you downloaded)

Edit ~/rpmbuild/SPECS/emacs.spec .  Change the %configure spec, replacing --with-x-toolkit=gtk3 and --with-xwidgets by
--with-x-toolkit=gtk2 and --without-xwidgets respectively.

Prepare:
$ rpmbuild -bp  ~/rpmbuild/SPECS/emacs.spec
(Probably not necessary, but it lets you  look around before compiling.)

Compile:
$ rpmbuild -bc ~/rpmbuild/SPECS/emacs.spec
This takes a few minutes.

As root:
cp ~your-username/rpmbuild/BUILD/emacs-nn/build-gtk/src/emacs /usr/local/bin/emacs-gtk2 (replace nn with the corresponding version)

You may notice these errors in the /var/log/messages log file:
** (emacs:2176): CRITICAL **: murrine_style_draw_box: assertion 'height >= -1' failed

The fix is to edit your theme's gtk-2.0/gtkrc file, most likely /usr/share/themes/BlueMenta/gtk-2.0/gtkrc. Change the following setting from 0 to 1:
GtkRange        ::trough-under-steppers         = 1

=========
Install Java JDK see https://fedoraproject.org/wiki/JDK_on_Fedora#Installing_Oracle_JDK_on_Fedora

==========

Install and start ypbind:

dnf install ypbind

Edit /etc/yp.conf:
domain divscimath server dsm.dsm.fordham.edu
domain divscimath server mandelbrot.dsm.fordham.edu

This is added for compatibility with the network service, which has been replaced by NetworkManager. Edit /etc/sysconfig/network and add a line:
NISDOMAIN=divscimath
# add to the end to a static port
YPSERV_ARGS="-p 944"
YPXFRD_ARGS="-p 945"
Edit /etc/sysconfig/yppasswdd:
# add below to set a static port
YPPASSWDD_ARGS="--port 946"

systemctl enable --now ypbind.service


Starting ypbind sometimes needs to wait for the network to be up and running.
Arrange for that to happen with this:
systemctl enable --now NetworkManager-wait-online.service

Name Service Cache Daemon (nscd):
dnf install nscd
systemctl enable --now nscd

systemctl restart rpcbind ypserv ypxfrd yppasswdd
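Once ypbind is up, the binding can be sanity-checked; these commands only work inside the NIS domain, so treat them as a manual check (the username in the last one is a placeholder):

```shell
ypwhich                    # prints the NIS server this client bound to
ypcat passwd | head -3     # shows entries from the NIS passwd map
getent passwd someuser     # placeholder user; verifies nsswitch resolves NIS accounts
```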

==========

Enable and start autofs:

Edit /etc/nsswitch.conf to have these:
automount:	files nis
passwd:		files nis systemd
shadow:		files nis
group:		files nis systemd
hosts:		files nis mdns4_minimal [NOTFOUND=return] dns myhostname mymachines

systemctl enable --now autofs.service

==========

Set up symlink to snapshots root:
ln -s /local/.snapshots /snapshots

==========

Restore /etc/ssh and /root/.ssh/authorized_keys
(Find these in /snapshots.)

Make sure to set Match Address 192.168.0.*,127.0.0.1,150.108.68.*,150.108.64.*,10.10.1.* in /etc/ssh/sshd_config to allow root logins only from Fordham CIS IPs.

Note: if selinux is enabled, it is necessary afterward to do
# chcon -t etc_t file
on each file in /etc/ssh and on /etc/ssh itself.
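A sketch of doing that in one pass; restorecon is an alternative that re-applies the default contexts (both assume SELinux is enabled):

```shell
chcon -t etc_t /etc/ssh /etc/ssh/*   # label the directory and each file in it
# or equivalently, restore the default contexts recursively:
restorecon -Rv /etc/ssh
```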

==========

Set up overnight dnf updates.  For example, use crontab -e to define

# snarf dnf updates every night at 1:35 am
35 01 * * * /usr/bin/dnf -y --skip-broken upgrade
# nuke dnf cache every month
35 00 1 * * /usr/bin/dnf clean all

Choose different times for different hosts to avoid network congestion.

NB you can restore from
/snapshots/daily.1/s/$HOST/var/spool/cron/root

==========

Enable logging.

dnf install rsyslog
systemctl enable --now rsyslog 

Disable useless audit log spam see https://unix.stackexchange.com/questions/241552/how-to-disable-useless-audit-success-log-entries-in-dmesg:
auditctl -e 0
auditctl -D
systemctl disable auditd

Adds first rule in /etc/rsyslog.conf:

#### RULES ####

# no audit
:programname, isequal, "audit" ~
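On newer rsyslog versions the trailing `~` (discard) action is deprecated; an equivalent rule in the newer syntax, in case the legacy form starts warning, is:

```
# discard messages from audit (newer rsyslog syntax)
if $programname == 'audit' then stop
```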

Edit /etc/default/grub and add "audit=0" to the end of the GRUB_CMDLINE_LINUX line.
For legacy BIOS run grub2-mkconfig --output /boot/grub2/grub.cfg
For EFI run grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

==========

Set up cross-campus drobo backups.  See drobo-howto.txt for details.

dnf install autofs cifs-utils samba-client

Create credentials files /etc/auto.smb.drobo-rh, drobo-lc (can copy
from any host that has one).  Make sure permission is 400 or 600 root.

Copy /etc/auto.cifs from a host that has it or from snapshot.

On all hosts except dsm:
ln -s /local/dsm/sbin/drobo-backup /usr/local/sbin
On dsm
cp ~moniot/src/sysadmin/drobo-backup.pl /usr/local/sbin/drobo-backup

Configure /etc/drobo-backup.conf or restore from a snapshot.

==========

Restore /etc/exports from snapshot.  Enable nfs server using

systemctl enable nfs-server.service

NOTE: non-existent mount points (e.g. /mnt/cdrom) are no longer
tolerated in /etc/exports.
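After restoring /etc/exports, stale entries can be caught immediately (a manual check on the server):

```shell
exportfs -ra   # re-read /etc/exports; complains about any non-existent path
exportfs -v    # list what is actually being exported
```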

==========

Install Zoom for Linux:
Visit https://zoom.us/download?os=linux and in the drop-down choose Fedora 64-bit. Click Download, and upload the RPM to the workstation.
Run dnf localinstall zoom_x86_64.rpm
=======
Systems on UPS need:
dnf -y install apcupsd apcupsd-cgi apcupsd-gui

Edit /etc/apcupsd/apcupsd.conf appropriately.  Copies of same for the
different hosts are in ~moniot/doc/sysadmin.

For JMH301A there are 3 APC Smart 750 UPSes connected via USB to the trill server.
Open a browser on trill and visit https://www.cis.fordham.edu/apcupsd/multimon.cgi
There are 3 processes running to allow viewing of multiple UPSes:
systemctl status apcupsd_hiddev0.service   # the UPS with the serial number ending in 0093
systemctl status apcupsd_hiddev1.service   # the UPS with the serial number ending in 2899
systemctl status apcupsd_hiddev2.service   # the UPS with the serial number ending in 3699
Make sure these services start after a reboot. I used https://wiki.debian.org/apcupsd#Configuring_.28Multiple_UPS_Devices.29 as a guideline.

==========

If local account differs from nis account (fred, nissim), make local
uid in /etc/passwd match the one in nis.

==========

Uncomment line ``#Method = nsswitch'' in /etc/idmapd.conf.  This is
needed for correct ownership of files in NFS mounted directories.

==========

For postgresql servers (dsm and erdos): 
[the following is out of date: see postgresql setup instructions]
  # service postgresql initdb
    -- set up pg_hba.conf with trust auth
  # service postgresql start
  [postgres ~] $ psql -f /var/lib/pgsql/backups/pg_dumpall.sql
  Then configure it for normal operation.
  On dsm: as postgres in ~/data:
    $ co -unormal pg_hba.conf 
    The relevant lines are:
local  all    postgres                                          ident pgmap
local  all      all                                             md5
host   all      all         127.0.0.1         255.255.255.255   md5

    Also need the following in pg_ident.conf
pgmap  postgres   postgres

  On erdos: we use ident authentication, pg_hba.conf should have:
local   all         all                               ident sameuser
host    all         all         127.0.0.1/32          ident sameuser
host    all         all         150.108.64.1/24       trust
host    all         all         ::1/128               ident sameuser
  and pg_ident.conf is default.

==========

For dsm and erdos: install mysql.


==========

For dsm:
 Set up NIS service
dnf install ypserv

systemctl enable yppasswdd
systemctl start yppasswdd
systemctl enable ypxfrd
systemctl start ypxfrd

Restore /var/yp/Makefile,ypservers from snapshots.
Punch some holes in firewalld for the following services:
firewall-cmd --add-service=rpc-bind --permanent 
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --add-port=944/tcp --permanent 
firewall-cmd --add-port=944/udp --permanent 
firewall-cmd --add-port=945/tcp --permanent 
firewall-cmd --add-port=945/udp --permanent 
firewall-cmd --add-port=946/udp --permanent 

For sendmail:
firewall-cmd --permanent --add-service=smtp

On dsm, since NFS uses random UDP ports (until we can find which ones/where to configure them), you have to add the IPs of servers/workstations to the "trusted" zone:
firewall-cmd --permanent --zone=trusted --add-source=150.108.64.64
firewall-cmd --permanent --zone=trusted --add-source=150.108.64.65
firewall-cmd --reload

Also on erdos, in order to add the JMH328C computers, I had to add their IP addresses, i.e.,
firewall-cmd --permanent --zone=trusted --add-source=150.108.68.10[6-10]

To debug firewalld issues, e.g., to see what port and protocol is being requested:
firewall-cmd --set-log-denied=all
And when done debugging:
firewall-cmd --set-log-denied=off

==========

For dsm and kopernik, install squirrelmail and dovecot. Note that Squirrelmail can be behind in getting RPMs for current versions of Fedora.
dnf install squirrelmail
dnf install dovecot
Copy dovecot.conf file over and verify the path to the LetsEncrypt SSL private and public keys.
Copy favicon.ico, as its absence will cause 404 errors that trigger false positives with Fail2ban:
cp /var/www/html/favicon.ico /usr/share/squirrelmail
To increase the default attachment size, see http://squirrelmail.org/wiki/index.php?page=AttachmentSiz:
To upload large files, post_max_size must be larger than upload_max_filesize. If the memory limit is enabled by the configure script, memory_limit also affects file uploading.
Generally speaking, memory_limit should be larger than post_max_size. Restart the php-fpm service after making changes to the /etc/php.ini file.
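A sketch of the relevant /etc/php.ini settings; the sizes below are illustrative values, not our actual ones:

```
; /etc/php.ini -- attachment-size related settings (illustrative values)
upload_max_filesize = 20M
post_max_size = 25M       ; must be larger than upload_max_filesize
memory_limit = 128M       ; should be larger than post_max_size
```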

Note that the following ports in firewalld may need to be opened; you might want to not open port 80 (HTTP) and/or 110 (POP3). Also confirm the zone:
firewall-cmd --zone=public --add-port=25/tcp --permanent  #SMTP
firewall-cmd --zone=public --add-port=80/tcp --permanent  #HTTP
firewall-cmd --zone=public --add-port=443/tcp --permanent #HTTPS
firewall-cmd --zone=public --add-port=143/tcp --permanent #IMAP
firewall-cmd --zone=public --add-port=993/tcp --permanent #IMAPS
firewall-cmd --zone=public --add-port=587/tcp --permanent #Submission
firewall-cmd --zone=public --add-port=465/tcp --permanent #SMTPS
==========

For dsm, restore /etc/hosts, passwd, group, /usr/local/adm from snapshots
N.B. can't use shadow with openwebmail
Set up httpd service.
Restore /var/www, including /var/www/cgi-bin/cryptpass.pl and /var/www/html/js/cryptpass.js
Need to install packages mod_auth_kerb mod_auth_mysql mod_auth_pgsql 
Restore /etc/auto.master, auto.local, auto.home
Copy files from  /usr/local/sbin and /usr/local/share/dict
Copy (tar) mailman from /var/lib/mailman/
Run systemctl start mailman and systemctl enable mailman
Set RedirectMatch ^/mailman[/]*$ http://www.dsm.fordham.edu/mailman/listinfo in /etc/httpd/conf.d/mailman.conf, systemctl restart mailman

Install greylisting, SpamAssassin/spamass-milter and clamav/clamav-milter for sendmail
Download KAM.cf from https://www.pccc.com/downloads/SpamAssassin/contrib/KAM.cf and nonKAMrules.cf from https://www.pccc.com/downloads/SpamAssassin/contrib/nonKAMrules into /etc/mail/spamassassin
Install the unbound DNS caching server to avoid URIBL_BLOCKED (see http://uribl.com/about.shtml) message in /var/log/maillog: 
dnf install unbound
TROUBLESHOOTING unbound:
-make sure no other DNS resolvers like DNSMASQ are running: netstat -nltup | grep 'Proto\|:53 \|:67 \|:80 \|:471 \|:5353'
-If libvirt (a toolkit to manage virtualization platforms) is running, it starts dnsmasq. The following 2 commands will disable libvirt networking and prevent it from starting on reboot (you may be prompted to install gnutls.x86_64):
virsh net-destroy default
virsh net-autostart --network default --disable
Run this command to make sure unbound is working correctly for Spamassassin and URIBL: 
dig test.uribl.com.multi.uribl.com txt +short
should respond with "permanent testpoint"

Copy the /etc/mail/spamassassin/local.cf file from a backup or make sure the following settings are in place:
dns_server 127.0.0.1
trusted_networks 150.108.68/24 150.108.64/24
internal_networks 150.108.68.26
dns_available yes

A good install guide for ClamAV is https://linux-audit.com/install-clamav-on-centos-7-using-freshclam/#comment-159087
Note for ClamAV using the TCPSocket/TCPAddr option in /etc/clamd.conf appears easier to configure.
Make sure ClamdSocket in /etc/mail/clamav-milter.conf matches TCPSocket/TCPAddr in /etc/clamd.conf.
Whitelist the logwatch reports, e.g., copy from a backup /etc/mail/clamav-milter-whitelist.conf and make sure the path is correct in /etc/mail/clamav-milter.conf
Install the third party signatures following the instructions at https://github.com/extremeshok/clamav-unofficial-sigs/blob/master/INSTALL.md

On storm restore /etc/httpd/conf.d/CGIaliases.conf, which allows a directory to be browsed without the tilde (~).
Restore files such as /etc/profile.d/systemctl.sh and any others in /etc/profile.d.

Customize /etc/skel
Install expect (for mkpasswd)
Set up sudoers to allow staff to create accounts:
## Allow members of the staff group to generate account cards
%staff dsm=/usr/local/sbin/gen-account-cards
## Allow members of the staff group to create accounts
%staff dsm=/usr/local/sbin/create-accounts

Copy all of the files & directories associated with create-accounts. Note that the files in /usr/local/bin, such as create-accounts and gen-account-cards, are wrapper scripts used to call the actual script in /usr/local/sbin, to allow sudo privileges for users in the 'staff' group.
/usr/local/bin/create-accounts
/usr/local/sbin/create-accounts
/usr/local/sbin/accountpostcreate.sh
/usr/local/sbin/gen-account-cards
/usr/local/sbin/facl-setup.sh
/usr/local/sbin/delete-accounts 
/usr/local/sbin/idle-accounts.pl 
/usr/local/sbin/pwgen
/usr/local/adm/*
on storm, there is a GUI to change passwords: /usr/local/mydev/bscripts/chpwd/*

Note that create-accounts has many checks for proper syntax in an input file, including removing control characters, e.g., ^M, that can come from Microsoft documents. It's not foolproof.
Also note that /usr/local/bin/create-accounts is a wrapper script for /usr/local/sbin/create-accounts, and gen-account-cards can be used as a password-reset script which will print an account card. Remember to use the -P- option to write to a local file.

In order to disable idle user accounts (e.g., accounts not logged into in 18 months), Dr. Moniot created and maintains two scripts: idle-accounts, which prints a list of user accounts that have not been logged into in a specified number of days, and delete-accounts. A sample command to check whether a user has not logged in within 2 years, i.e., 720 days, looks like this:
idle-accounts -mtime 720
For the delete-accounts script, take note of the environment variable DEADUSERDIR. We used to make /scratch/dead_users on dsm a symlink to /scratch/backups/.snapshots/dead_users on mandelbrot; dsm has enough space to handle the archiving. This also has to change for storm, which does not use NIS: the last 3 lines that reference NIS maps need to be commented out or removed. delete-accounts can be run with a list of users from the idle-accounts script, either from standard input or with the usernames separated by spaces, for example:
delete-accounts user1 user2
To restore users that were archived, you will have to recreate the user account and then restore the archived files. Since the backed up files and directories are saved using the full path, it's easiest to run the tar command from the / directory, for example: 
cd /
tar -xzvf /scratch/dead_users/archived-user.tgz .

Install certbot for the Let's Encrypt SSL certificate and copy the entry for cron to renew the certs.
dnf install certbot
==========

For erdos with ext4 file system:
  in fstab, turn on quotas and ACLs:
   /home/users             ext3    exec,nosuid,rw,usrquota,acl 1 2
  then run quotacheck to set up the quota files:
 # quotacheck -c /home/users
  Add dsm's root public key to ~root/.ssh/authorized_keys so that
    edquota can be run remotely from dsm.
  Restore /etc/hangman.conf with line HANGMAN_PORT=9999

For erdos with xfs file system:
in /etc/fstab add quota option:
/dev/mapper/fedora_newerdos-home /home                   xfs  defaults,gquota    1 2  
Run: 
xfs_quota -xc 'limit -g bsoft=900m bhard=990m students' /home
xfs_quota -xc 'report' /home/
Group quota on /home (/dev/mapper/fedora_newerdos-home)
                               Blocks                     
Group ID         Used       Soft       Hard    Warn/Grace     
---------- -------------------------------------------------- 
root                0          0          0     00 [--------]
students            0     921600    1013760     00 [--------]
localguy           20          0          0     00 [--------]


==========

For erdos and the Bioinformatics course, install Blast via dnf using the latest rpm at ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/. Also install libidn, libidn-devel and bwa. Download a zip file of GATK (Genome Analysis Toolkit) from https://github.com/broadinstitute/gatk/releases, place it in a directory like /usr/local/bin, and create a symlink for gatk.
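A sketch of the GATK placement; the version number is a placeholder for whatever release was actually downloaded:

```shell
cd /usr/local/bin
unzip gatk-4.x.y.zip                        # unpacks into gatk-4.x.y/ (placeholder version)
ln -s /usr/local/bin/gatk-4.x.y/gatk gatk   # so plain 'gatk' is on the PATH
```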


=========
For erdos & storm:
Optional to provide web remote desktop with X2Go:
dnf install x2goserver
systemctl enable x2gocleansessions.service --now
You can choose to install the XFCE or MATE desktop
dnf install @xfce-desktop-environment
or
dnf groupinstall -y "MATE Desktop"
As of late 2021, XFCE has a bug with compositing which needs to be disabled, see https://gitlab.xfce.org/xfce/xfwm4/-/issues/551#note_32708:
Create a file:
/etc/x2go/xinitrc.d/xfwm4_no_compositing
containing:
/usr/bin/xfconf-query -c xfwm4 -p /general/use_compositing -s false
chmod 755 /etc/x2go/xinitrc.d/xfwm4_no_compositing

Python 3 
1. Since Python 2 is deprecated, Python 3 should be the default and comes with Fedora.
2. Optional: Download the latest Anaconda Python installer from https://www.anaconda.com/distribution/#linux
3. Install it by running ./Anaconda3-yyyy.mm-Linux-x86_64.sh (replace yyyy and mm accordingly). This includes the pandas, numpy, scipy, and matplotlib libraries.
4. Install the following modules/libraries, e.g., pip3 install pandas keras nibabel tensorflow tensorflow-gpu gensim imbalanced-learn mlxtend xgboost seaborn nltk graph-tools graphviz opencv-python mysql-connector-python-rf flask nilearn nibabel xlrd pymc3 bs4 filterpy altair scipy numpy scikit-learn matplotlib tqdm psutil wrapt gensim ipython tornado jinja2 wheel fasttext geopandas statsmodels altair
5. If the device has a GPU, add tensorflow-gpu
6. Add the path to Anaconda Python 3 in a .sh file in /etc/profile.d, e.g., on storm see /etc/profile.d/python3.sh
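A minimal sketch of such a profile.d file; the Anaconda install path below is an assumption, so match it to wherever the installer actually put it:

```shell
# /etc/profile.d/python3.sh -- put Anaconda's python3 first on PATH
# (assumed install location; adjust to the real one on each host)
export PATH=/opt/anaconda3/bin:$PATH
```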


==========
If httpd not installed:
mkdir /var/www 

(For the sake of uniformity, rsnapshot on mandelbrot is configured to
back up this directory on all hosts, and it will report an error
message if the directory does not exist.)

==========

Remove /var/spool/mail and symlink to /local/mail
   Make sure automount is enabled and working right.  (Note autofs is
   not installed by default.  It may get disabled in upgrade.)

Configure sendmail as follows.
1. dnf install sendmail sendmail-cf procmail
2. Edit /etc/mail/sendmail.mc:
Add the following real time black lists:
FEATURE(`dnsbl', `b.barracudacentral.org', `', `"550 Mail from " $&{client_addr} " refused. Rejected for bad WHOIS info on IP of your SMTP server " in http://www.barracudacentral.org/lookups "')dnl
FEATURE(`dnsbl',`zen.spamhaus.org')dnl
FEATURE(`enhdnsbl', `bl.spamcop.net', `"Spam blocked see: http://spamcop.net/bl.shtml?"$&{client_addr}', `t')dnl
FEATURE(`enhdnsbl', `z.mailspike.net', `"550 Rejected: IP found in mailspike RBL"')dnl 
FEATURE(`enhdnsbl', `dnsbl.inps.de', `"550 Rejected: IP found in inps.de RBL"')dnl

For ClamAV:
INPUT_MAIL_FILTER(`clamav-milter', `S=inet:6666@127.0.0.1, F=, T=S:4m;R:4m')dnl

For SpamAssassin:
INPUT_MAIL_FILTER(`spamassassin', `S=unix:/var/run/spamass-milter/spamass-milter.sock, F=, T=C:15m;S:4m;R:4m;E:10m')dnl


    Comment out (by prefixing with dnl)
      FEATURE(always_add_domain)dnl
    i.e. by changing it to read
      dnl FEATURE(always_add_domain)dnl

    Comment out
      DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl

    Comment out
      FEATURE(`accept_unresolvable_domains')dnl

    Comment out
       LOCAL_DOMAIN(`localhost.localdomain')dnl

    Change
       dnl MASQUERADE_AS(`mydomain.com')dnl
    to
       MASQUERADE_AS(`dsm.fordham.edu')dnl

    Uncomment
       dnl FEATURE(masquerade_envelope)dnl
    after which, insert the following:
dnl #
dnl # Added by agw (29 Aug 2013) so that recipient addresses are masqueraded
dnl #
FEATURE(`allmasquerade') dnl

3. In /etc/mail directory, do "make".

4. systemctl (re)start sendmail

==========

Arrange for root's mail to be sent to an unprivileged account for
reading.

1. Add to /etc/aliases:

# Person who should get root's mail
root:		unclroot

2. Run newaliases

==========

[Optional] in /etc/gdm/custom.conf under [greeter]:
IncludeAll=false

==========

For mandelbrot after fresh install, set up matlab license manager.
Mandelbrot acts as license manager for the lab machines in LL612.
The setup is a network installation.
http://www.mathworks.com/help/install/network-license-administration.html

Check the licensing manager start up script on reboot in /etc/systemd/system/lmgrd.service

systemctl enable yppasswdd
systemctl start yppasswdd
systemctl enable ypxfrd
systemctl start ypxfrd

==========

On non-dsm hosts, set MAIL=/root/mail for root user.  Put the
following into /etc/profile.d/mailcheck.sh:

    LOGNAME=$USER
# Mail spool is nfs mounted; don't want local root logins to hang
# if nfs server is down.  For some reason setting MAILCHECK=-1
# doesn't suppress initial mailcheck, so set MAIL to a local location. -rkm
      if [ `id -u` = "0" ]; then
        MAIL=/root/mail
      else
        MAIL="/var/spool/mail/$USER"
      fi

==========

Configure printers:
systemctl enable cups
systemctl start cups
-For HP desktop printers install these 2 packages: dnf install hplip hplip-gui
-Make sure to set the printer's Make and Model field if using system-config-printer; "raw data" will not interpret Unix carriage returns.
-see https://superuser.com/a/280400/691755, change ErrorPolicy retry-job in /etc/cups/printers.conf and set JobRetryInterval 3 JobRetryLimit 3 in /etc/cups/cupsd.conf
10.224.88.206   ps610-c.dsm.fordham.edu ps610-c # HP color laser in 610 hallway had to use Fordham IP not CIS
150.108.64.132  ps612.dsm.fordham.edu ps612      #printer in back corner
150.108.64.98   ps813.dsm.fordham.edu ps813-s.dsm.fordham.edu ps813 ps813-s     # HP laserjet in 813 in front of Eliane, only in emergency
#150.108.64.97   ps601.dsm.fordham.edu ps812     # HP laserjet in 601, USB as there’s no free Ethernet port
#150.108.64.95  ps824-c.dsm.fordham.edu ps824-c # HP color laser in 824 Mary Hamilton's only print if emergency

To add Canon Uniflow printers/copiers to Fedora, see https://www.howtoforge.com/how-to-install-a-canon-printer-on-debian-and-debian-like-systems for tips.
For the 3550i on the 6th floor, download the Linux driver from Canon's Australia web site, as the USA web site (as of late 2019) did not have updated Linux drivers: https://www.canon.com.au/support/sims-content?pid=4968a1e11beb43d28a163e77b4dedc66&cid=DA56D1C9FDC240B0844D6A2E7524C0CF&ctype=dr
Run tar -xvf (as of 2019 the filename was linux-UFRII-drv-v500-uken-06.tar.gz), cd into the directory, run the install.sh script, and say yes to all prompts.
Run the following command remembering to change the username to your Fordham Access ID login name, i.e., the left side of @fordham.edu:
/usr/sbin/lpadmin -p FordhamSecurePrint -E -v lpd://YOUR-USER-NAME@enterpriseprint.it.fordham.edu/fordhamsecureprint -D "FordhamSecurePrint"
Run system-config-printer and in the Make and Model field click Change, then find and select "Canon iR-ADV C3530 C3525/3530 III UFR II [en]". This might change if/when the copier is replaced.

==========

Create symlinks /usr/local.dsm et al., e.g. /usr/local.dsm -> /local/dsm/

==========

Install iptables-setup from ~moniot/src/sysadmin/scripts/ into /usr/local/sbin.

==========

For upgrade or install: turn off selinux. Edit /etc/selinux/config:
SELINUX=disabled
For immediate effect use setenforce 0

==========

Set up symlinks to custom texlive:
ln -s /usr/local.dsm/tex /usr/local/tex
ln -s /usr/local/tex/${RELEASE}/bin/${ARCH}-linux/* /usr/local/bin
where RELEASE is the latest version and ARCH is i386 or x86_64.
For dsm:
ln -s /usr/local.dsm/texlive/2007 /usr/local.dsm/tex

==========

If latex2html is installed, fix /usr/share/latex2html/l2hconf.pm, removing the -Ppdf flag from this line so it looks like this:
$DVIPSOPT = ' -E';

==========

To disable the user list from appearing in the GDM login screen, create a file named /etc/dconf/profile/gdm and add the following lines:
user-db:user
system-db:gdm
file-db:/usr/share/gdm/greeter-dconf-defaults
(gdm is the name of a dconf database.)

Create a gdm keyfile for machine-wide settings in /etc/dconf/db/gdm.d/00-login-screen with the following lines:
[org/gnome/login-screen]
# Do not show the user list
disable-user-list=true

Update the system databases by running:
dconf update

========

Install Weka:
dnf install weka
-Create a desktop shortcut in /usr/share/applications, along with an icon.
A sample weka.desktop file looks like this; just match the paths to Weka and the icon:
cat /usr/share/applications/weka.desktop
[Desktop Entry]
Type=Application
Encoding=UTF-8
Name=Weka
Comment=Weka Application
Exec=java -jar /usr/local/bin/weka-3-8-2/weka.jar
Icon=/usr/local/bin/weka-3-8-2/weka.ico
Terminal=false

==========

For any computers with access from outside Fordham's network, install fail2ban:
dnf install fail2ban
Copy the /etc/fail2ban/jail.local file from a working computer.
systemctl enable fail2ban
systemctl start fail2ban
From storm copy this script: /usr/local/sbin/abuseipdb-to-hosts-deny.sh
As of Sept. 2021, storm is used to scp this script, via crontab, to all servers that are accessible outside Fordham.
Install a script called sync-blocklist from https://gist.github.com/klepsydra/ecf975984b32b1c8291a#gistcomment-2038935 that blocks IPs reported to the well-known blocklist.de reporting service. It runs via /etc/cron.daily

===========

Install mongodb Community Edition; check for the latest version:
cat << EOL > /etc/yum.repos.d/mongodb-org-3.4.repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
EOL
dnf install -y mongodb-org
systemctl enable mongod.service
systemctl start mongod.service
systemctl status mongod.service
As of March 2018 there are errors on startup such as "Failed to start High-performance, schema-free document-oriented database." Try the following commands:
mkdir -p /var/run/mongodb/ ; chown -R mongod:mongod /var/run/mongodb/
Also check permissions in /var/lib/mongo; owner & group should be mongod:mongod.
You may need to delete a lock file if someone had installed another version or via a tar package:
sudo rm /tmp/mongodb-*.sock

==========

To remove Qualys entries from root's history command, add this to ~/.bashrc:
export HISTIGNORE=*QUALYS*:*ORIG_PATH*:echo\ *TEST*
To increase the history command's saved number of commands, add this to ~/.bashrc:
HISTSIZE=2000
HISTFILESIZE=3000

===========

To get Dell EMC OpenManage Server Administrator (OMSA) working on Fedora see https://www.dell.com/community/Dell-OpenManage-Essentials/OMSA-9-4-dependencies-in-Fedora-32-Job-for-instsvcdrv-service/m-p/7696202#M16167
Dell does NOT officially support Fedora.
-Download the os-dependent RPMs from http://linux.dell.com/repo/hardware/dsu/os_dependent/; as of this writing it's Red Hat 8: http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL8_64/srvadmin/
-Several dependencies may be missing; install the following: openwsman-server libxml2 net-snmp
-Search for the current RPM of libwsman1, changing xx to the respective version of Fedora: https://rpmfind.net/linux/fedora/linux/releases/xx/Everything/x86_64/os/Packages/l/
-If SELinux is disabled, skip srvadmin-selinux*
-Confirm these symbolic links are in place, noting the versions will be different:
lrwxrwxrwx 1 root root      13 Dec 24  2019 /opt/dell/srvadmin/lib64/libomacs.so -> libomacs.so.1
lrwxrwxrwx 1 root root      21 Dec 24  2019 /opt/dell/srvadmin/lib64/libomacs.so.1 -> libomacs.so.1.100.147
-rw-r--r-- 1 root root 2521032 Dec 24  2019 /opt/dell/srvadmin/lib64/libomacs.so.1.100.147
-Start OMSA with /opt/dell/srvadmin/sbin/srvadmin-services.sh start
As long as systemctl status shows running on these 3:
dsm_sa_eventmgrd.service
dsm_sa_snmpd.service
dsm_om_connsvc.service
systemctl status instsvcdrv.service will show failed with:
instsvcdrv.service: Control process exited, code=exited, status=155/n/a
-Wait a few minutes, open a browser on the server, and visit https://127.0.0.1:1311

==============

To configure a VessRAID to a server/host:
Connect an Ethernet cable from a free NIC port on the server to one of the ports on the VessRAID. For this tutorial use port 2.
Set a static IP of 10.0.10.5 on the server NIC (making sure not to duplicate the IP of the port on the VessRAID).
Install iscsid and enable it:
systemctl enable iscsid
In /etc/iscsi/initiatorname.iscsi:
For LC/olderdos: InitiatorName=iqn.1994-12.com.promise.c1.8c.65.55.1.0.0.20
For RH/tartarus: InitiatorName=iqn.1994-12.com.promise.c9.f0.57.55.1.0.0.20
Run the following 3 commands:
iscsiadm -m discovery -t sendtargets -p 10.0.10.2
iscsiadm --mode node --targetname iqn.1994-12.com.promise.c1.8c.65.55.1.0.0.20
iscsiadm --mode node --targetname iqn.1994-12.com.promise.c1.8c.65.55.1.0.0.20 --portal 10.0.10.1:3260 --login
To confirm it's working run:
iscsiadm -m session -P 0
===========================================================
Two issues to watch for with nightly dnf updates on dsm, for example using Nik's stock demo at https://dsm.dsm.fordham.edu/~cs1600demolc/
When Apache, i.e., httpd, is updated, it installs a new version of a binary called suexec. The minimum UID and GID allowed for users by suEXEC are now 1000 and 500, respectively (previously 100 and 100). The group students has a GID of 200, which causes the error below:
Sep 23 11:04:46 dsm.dsm.fordham.edu suexec[1500165]: uid: (7522/cs1600demolc) gid: (200/students) cmd: insertSymbols.cgi
Sep 23 11:04:46 dsm.dsm.fordham.edu suexec[1500165]: cannot run as forbidden gid (200/insertSymbols.cgi)
So the first thing to check would be a command like:
journalctl -b | grep --color=auto "suexec"
A long-term fix would be to change the group ID of students.
Additionally, when Python is updated, all of the pip modules/libraries will need to be re-added. In the case of this stock demo the MySQL connector is needed.
The Apache SSL logs will have an error like this:
AH01215: stderr from /u/erdos/students/cs1600demolc/public_html/cgi-bin/insertSymbols.cgi: ModuleNotFoundError: No module named 'mysql', referer: https://dsm.dsm.fordham.edu/~cs1600demolc/
Running pip3 install mysql-connector-python will fix that.
------------------------------------------------------------
Custom SLURM Deployment for Bright Cluster
Background: Bright Cluster currently does not offer Nvidia NVML support for SLURM packages, so custom packages have to be created.
If being done from scratch the following dependencies are needed.
Install Slurm dependencies:
yum install -y python36 gtk2-devel lua-devel cm-ucx readline-devel json-c-devel pam-devel pmix-devel cm-pmix3
Install CUDA dependencies:
yum install -y cuda10.2-toolkit
Add the NVIDIA libraries to the LD config. This step will also need to be done in the software image where Slurm is deployed, as Slurm doesn't seem to check LD_LIBRARY_PATH.
cat /etc/ld.so.conf.d/cuda.conf
/cm/shared/apps/cuda10.2/toolkit/current/lib64
/cm/local/apps/cuda-driver/libs/current/lib64
ldconfig -v | grep nv
Create RPMs
NOTE: The following instructions use Slurm version 20.02.6. The 20 corresponds to the year it was released, 2020, and 02 to the month, February. The .6 is the 6th sub-release within 20.02.
Download the Bright Cluster files needed for SLURM:
mkdir /root/build
cd /root/build
wget http://support2.brightcomputing.com/simon/SR-31534/slurm-buildfiles-bright.tar.gz
tar xvzf slurm-buildfiles-bright.tar.gz
Edit the file prepare.sh to retrieve the required Slurm version. Around line 75, change:
wget -N http://dev/src/${NAME}/${NAME}-${VERSION}.tar.bz2
to:
wget https://download.schedmd.com/slurm/slurm-20.02.6.tar.bz2
Edit the slurm20.spec file and define the Bright Cluster release version and commit. There are several places, as outlined below.
Change:
%define cmrelease %(bc-branch-dependency-name)
%define release %(bc-revision-number)_%(bc-release-tag)
to (note this coincides with the Bright Cluster version, 8.2 as of Feb 2021):
%define cmrelease 8.2
%define release mybuild
Add an additional configure option to slurm20.spec; around line 368 add:
--with-nvml=/cm/local/apps/cuda-driver/libs/current
Note that the leading character is a tab, else rpmbuild fails:
--with-hwloc=/cm/shared/apps/hwloc/current \
--with-nvml=/cm/local/apps/cuda-driver/libs/current \
It should end up looking like this, changing from:
--with-hwloc=/cm/shared/apps/hwloc/current \
%if %{rhel7_based} || %{rhel8_based}
to:
--with-hwloc=/cm/shared/apps/hwloc/current \
--with-nvml=/cm/local/apps/cuda-driver/libs/current \
%if %{rhel7_based} || %{rhel8_based}
Also in slurm20.spec, adjust the version number from:
Version: 20.02.2
to:
Version: 20.02.6
Additionally, around line 379 in slurm20.spec, add the following stanza right after %install and before the mkdir commands:
%install
# Strip out some dependencies
cat > find-requires.sh <<'EOF'
exec %{__find_requires} "$@" | egrep -v '^libpmix.so|libevent|libnvidia-ml'
EOF
chmod +x find-requires.sh
%global _use_internal_dependency_generator 0
%global __find_requires %{_builddir}/%{buildsubdir}/find-requires.sh
rm -rf %{buildroot}
make install DESTDIR=%{buildroot}
make install-contrib DESTDIR=%{buildroot}
Comment out the lines containing power and unit:
#install -D -m644 etc/layouts.d.power.conf.example %{buildroot}/%{_sysconfdir}/layouts.d/power.conf.example
#install -D -m644 etc/layouts.d.power_cpufreq.conf.example %{buildroot}/%{_sysconfdir}/layouts.d/power_cpufreq.conf.example
#install -D -m644 etc/layouts.d.unit.conf.example %{buildroot}/%{_sysconfdir}/layouts.d/unit.conf.example
#%config %{_sysconfdir}/layouts.d/power.conf.example
#%config %{_sysconfdir}/layouts.d/power_cpufreq.conf.example
#%config %{_sysconfdir}/layouts.d/unit.conf.example
Execute prepare.sh.
The "20" below indicates slurm20:
./prepare.sh 20
It will prepare the build environment and download the source.
Load the CUDA 10 Toolkit module:
module load cuda10.2/toolkit/10.2.89
Build the RPMs:
rpmbuild -bb slurm20.spec | tee build.log
Wait a while, 15-20 minutes. The RPMs will be available under /usr/src/redhat/RPMS/x86_64/
Check build.log to confirm NVML was detected:
checking nvml.h usability... yes
checking nvml.h presence... yes
checking for nvml.h... yes
checking for nvmlInit in -lnvidia-ml... yes
Upgrade Process
Upgrade steps from the "Upgrades" section of https://slurm.schedmd.com/quickstart_admin.html
1. Shut down the slurmdbd daemon:
systemctl stop slurmdbd.service
2. Perform a backup. Retrieve the SLURM MySQL database credentials like so:
sudo grep ^Storage[Pass,User,Loc] /etc/slurm/slurmdbd.conf
Then, using those credentials, run:
mysqldump --single-transaction slurm_acct_db -u slurm -p > slurm-01222021.sql
3. Upgrade the slurmdbd daemon; the RPMs are in /usr/src/redhat/RPMS/x86_64/:
yum update /usr/src/redhat/RPMS/x86_64/slurm20-slurmdbd-20.02.6-mybuild.x86_64.rpm
For debugging purposes you can run: slurmdbd -D -vvv
If you see this error, change the permissions accordingly, noting the file is a symbolic link:
slurmdbd: fatal: slurmdbd.conf file /etc/slurm/slurmdbd.conf should be 600 is 640 accessible for group or others
If you see this error, change the ownership accordingly:
slurmdbd: fatal: slurmdbd.conf not owned by SlurmUser root!=slurm
4. Start slurmdbd:
systemctl start slurmdbd.service
5. Confirm that SlurmdTimeout and SlurmctldTimeout are set to 600 (10 minutes); make a copy of slurm.conf and edit it if needed. The defaults are lower:
grep -E "SlurmdTimeout|SlurmctldTimeout" slurm.conf
SlurmctldTimeout=300
SlurmdTimeout=300
Confirm the values are set to 600, and run the next command if they were changed:
scontrol reconfigure
6. Shut down slurmctld:
systemctl stop slurmctld
7. On the compute nodes, stop slurmd:
pdsh -w node00[1-3] systemctl stop slurmd
8.
Make a backup of the StateSaveLocation directory:
grep StateSaveLocation slurm.conf
StateSaveLocation=/cm/shared/apps/slurm/var/cm/statesave
9. Perform the upgrade of slurmctld (slurm20) and slurmd (slurm20-client).
Head node packages to upgrade:
slurm20-20.02.6-mybuild.x86_64.rpm
slurm20-perlapi-20.02.6-mybuild.x86_64.rpm
slurm20-contribs-20.02.3-mybuild.x86_64
slurm20-client-20.02.6-mybuild.x86_64.rpm
yum update *20.02.6*.rpm
Compute node package to update:
slurm20-client-20.02.3-mybuild.x86_64
pdsh -w node[001-003] yum install /usr/src/redhat/RPMS/x86_64/slurm20-client-20.02.6-mybuild.x86_64.rpm -y
10. Restart slurmd on the compute nodes:
systemctl restart slurmd
11. Restart slurmctld:
systemctl restart slurmctld
12. Validate by submitting jobs and checking the status of the slurmd service on the compute nodes.
13. Verify, and if needed restore, the SlurmdTimeout and SlurmctldTimeout values, then run:
scontrol reconfigure
14. Once slurm is confirmed working, update the compute node image (inside the chroot, the copied RPM is under /root):
cp -a /usr/src/redhat/RPMS/x86_64/slurm20-client-20.02.6-mybuild.x86_64.rpm /cm/images/gpu_nvml/root
chroot /cm/images/gpu_nvml
yum update /root/slurm20-client-20.02.6-mybuild.x86_64.rpm
exit
15. Test with a reboot.
==============================
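The timeout edits in steps 5 and 13 of the SLURM upgrade above can be scripted. This is a minimal sketch, demonstrated on a temporary copy rather than the live slurm.conf; the sed invocation is my own, the parameter names and values come from the steps above:

```shell
# Sketch: back up slurm.conf and raise SlurmdTimeout/SlurmctldTimeout to 600.
conf=$(mktemp)
printf 'SlurmctldTimeout=300\nSlurmdTimeout=300\n' > "$conf"   # sample content
cp "$conf" "$conf.bak"                                         # keep a backup copy
sed -i -E 's/^(SlurmdTimeout|SlurmctldTimeout)=.*/\1=600/' "$conf"
grep -E 'SlurmdTimeout|SlurmctldTimeout' "$conf"
```

After editing the real file, run scontrol reconfigure as in step 5.
==============================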
Autograder FAQs

Error message: "An unexpected error occurred while grading your submission. This submission will not count towards your daily limit."
Check the Docker logs for the ag-grader container, e.g.:
docker logs -f ag-grader
Or start a Django shell:
docker exec -it ag-django python3 manage.py shell
and then run:
from autograder.core.models import Submission
s = Submission.objects.get(pk=69)  # use the ID shown in the error message on the website
print(s.error_msg)
When running 'top' I see a process labeled beam.smp with a user ID of smolt. These are processes belonging to Autograder and are safe; they belong to the rabbitmq container.
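To confirm which account owns such a process, the process table can be parsed; a sketch demonstrated on a captured sample line (the PID is made up — on a live host run ps -eo pid,user,comm | awk '$3 == "beam.smp"'):

```shell
# Parse a captured ps line and extract the owning user of beam.smp.
sample='1234 smolt beam.smp'
owner=$(printf '%s\n' "$sample" | awk '$3 == "beam.smp" {print $2}')
echo "$owner"
```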
In the browser I see 'Authorization Error Error 400: redirect_uri_mismatch'. Make sure you chose "Web Application" in the Google Developer Console under APIs & Services. Also check that the redirect URI includes "api/oauth2callback/" in the string.
PostgreSQL Database directory appears to contain a database; Skipping initialization
FATAL: "/var/lib/postgresql/data" is not a valid data directory
DETAIL: File "/var/lib/postgresql/data/PG_VERSION" does not contain valid data.
HINT: You might need to initdb.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
[...]
File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
You most likely did not change the Postgres version number to 13. You will have to delete the postgres volume. Stop Autograder, e.g., 'docker-compose -f docker-compose-single.yml stop', then:
docker container prune
docker volume rm autograder-full-stack_pgdata
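One way to check which Postgres major version the compose file pins is to extract it from the image tag; a sketch, where the sample line stands in for the relevant line of docker-compose-single.yml (whose exact contents may differ):

```shell
# Extract the major version from an "image: postgres:NN" line.
sample='    image: postgres:13'
ver=$(printf '%s\n' "$sample" | sed -n 's/.*postgres:\([0-9][0-9]*\).*/\1/p')
echo "$ver"
```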
In a browser you see "This site can’t be reached x.x.x.x refused to connect." "ERR_CONNECTION_REFUSED" Check if any other service is running on port 443.
When you try browsing an IP that is running inside the container (one that starts with 172.18.0), you get a '404 Not Found'. Check if any other service is running on port 443.
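To see which process already holds port 443, the listener table can be parsed; a sketch demonstrated on a captured sample line (the nginx entry is made up — on a live host run ss -tlnp as root):

```shell
# On a live host: ss -tlnp | awk '$4 ~ /:443$/'
# Here we parse a captured sample line; field 4 is the local address:port.
sample='LISTEN 0 128 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=1234,fd=6))'
holder=$(printf '%s\n' "$sample" | awk '$4 ~ /:443$/ {print $6}')
echo "$holder"
```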
To add a superuser from the CLI, open a Django shell:
docker exec -it ag-django python3 manage.py shell
from autograder.core.models import Course
from django.contrib.auth.models import User
me = User.objects.get(username='your-email@fordham.edu')
me.is_superuser = True
me.save()
To delete all courses (careful!):
Course.objects.all().delete()
When trying to run a database backup via docker exec you get "the input device is not a TTY".
First back up the database inside the container, then download it:
docker exec -it ag-postgres pg_dump --username=postgres --format=c postgres -f /db_backup
docker cp ag-postgres:/db_backup .
When trying to copy a database to the container you get "the input device is not a TTY".
Copy the file back into the container and restore from there:
docker cp db_backup ag-postgres:/
docker exec -i ag-postgres pg_restore --username=postgres --format=c -d postgres /db_backup
During a restore, errors such as the following appear:
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 207; 1259 16415 TABLE auth_group postgres
pg_restore: error: could not execute query: ERROR: relation "auth_group" already exists
Command was: CREATE TABLE public.auth_group (
id integer NOT NULL,
name character varying(150) NOT NULL
);
1. In docker-compose-single.yml, at the bottom in the "volumes" block, add another volume called "pgdata2":
```
volumes:
  redisdata: {}
  pgdata: {}
  pgdata2: {}
  rabbitmqdata: {}
  sandbox_image_registry_data: {}
```
2. In docker-compose-single.yml, under the "postgres" block, change the "volumes" entry to use pgdata2:
```
postgres:
  container_name: ag-postgres
  ....
  volumes:
    - pgdata2:/var/lib/postgresql/data/
```
3. Stop and start the stack.
4. Copy the DB dump into the ag-postgres container and then restore it.
When running docker build you see this warning:
WARNING: Service "postgres" is using volume "/var/lib/postgresql/data" from the previous container. Host mapping "autograder-full-stack_pgdata8" has no effect. Remove the existing containers (with `docker-compose rm postgres`) to use the host volume mapping.
Run "docker rm ag-postgres", then re-run the restore command.
There are grayed-out options in tests. This is most likely a misconfiguration of the feedback settings: the "Normal" feedback preset is probably set to "Private" for those gray tests, and you should change the preset for each of those test cases to reflect what you want the student to see.
A user sees "accounts.google.com refused to connect." Make sure there is no trailing slash in the URL, e.g., https://storm.cis.fordham.edu:8443
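Stripping a trailing slash can be done with shell parameter expansion; a quick sketch using the URL above:

```shell
url='https://storm.cis.fordham.edu:8443/'
url="${url%/}"   # remove one trailing slash, if present
echo "$url"
```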
smtplib.SMTPRecipientsRefused: {'xx@ourdomain.edu': (550, b'5.7.1 ... Relaying denied. IP name lookup failed [172.23.0.3]')}
Either use the docker container bytemark/smtp, or add
Connect:172.23.0.3 RELAY
to /etc/mail/access, then run: makemap hash /etc/mail/access < /etc/mail/access
smtplib.SMTPRecipientsRefused: {'xx@ourdomain.edu': (451, b'4.7.1 Greylisting in action, please come back later')} add the IP, e.g., 172.23.0.3, to /etc/mail/greylist.conf and restart milter-greylist.service
An error such as:
/etc/docker/daemon.json: json: cannot unmarshal string into Go value of type map[string]interface {}
appears in /var/log/messages. Check the syntax of the daemon.json file; it should look like:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
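Before restarting Docker you can validate the file's JSON syntax with python3 -m json.tool; a sketch shown against a temporary copy (point it at /etc/docker/daemon.json on the real host):

```shell
# Write a sample daemon.json to a temp file and validate it.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "5" }
}
EOF
if python3 -m json.tool "$tmp" >/dev/null 2>&1; then
  result=valid
else
  result=invalid
fi
echo "$result"
```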
/var partition fills up with logs.
Run a command like docker system prune or docker builder prune. Note the former will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache

Docker fails to start, e.g., after a reboot, with errors like:
Nov 10 11:36:19 dockerd[17214]: time="2021-11-10T11:36:19.905814205-05:00" level=warning msg="could not create bridge network for id cc0e710fa0e8f75aa81fd13b8c381cfae8fae40d3f0febaab90fbfbffc77c463 bridge name docker0 while booting up from persistent state: Failed to program NAT chain: ZONE_CONFLICT: 'docker0' already bound to a zone"
Nov 10 11:36:19 dockerd[17214]: time="2021-11-10T11:36:19.907495036-05:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 10 11:36:19 dockerd[17214]: time="2021-11-10T11:36:19.942956319-05:00" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Nov 10 11:36:19 dockerd[17214]: failed to start daemon: Error initializing network controller: Error creating default "bridge" network: Failed to program NAT chain: ZONE_CONFLICT: 'docker0' already bound to a zone
See https://github.com/moby/moby/issues/41609 for suggestions such as:
1. Remove docker0 from the trusted interfaces:
sudo firewall-cmd --zone=trusted --remove-interface=docker0
sudo firewall-cmd --zone=trusted --remove-interface=docker0 --permanent
2. Remove it from /etc/firewalld/zones/trusted.xml and run systemctl restart firewalld
The script that collects project scores (as percentages) per student can be found here
It requires a Python module called autograder-contrib, which was installed via: pip3 install autograder-contrib
Then run the script, specifying the course ID and the base URL that your deployment is available at:
python3 download_grades --base_url "https://storm.cis.fordham.edu:8443/"
Note that you should not include "/api" in the URL, as the script handles that.
If you see an error like this:
requests.exceptions.MissingSchema: Invalid URL '/api/courses/18/projects/': No schema supplied. Perhaps you meant http:///api/courses/18/projects/?
You likely copied and pasted a rounded quote, e.g., “ and ”.
If you see an error such as the below:
raise SSLError(e)
requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)
This implies the intermediate or full certificate chain is missing from the website's SSL certificate, i.e., from Let's Encrypt's certbot. The system administrator should use the full-chain certificate in the Docker container, in the ~/autograder-full-stack/nginx/production/conf.d/default.conf file.
When running docker-compose -f docker-compose-single.yml stop, if you get an error like this:
ERROR: Couldn't find env file: /root/autograder-full-stack/autograder-server/schema/env
The submodule updates for 2022.01.v1 changed the schema, so there's no more apidocs container. You can run:
mkdir /root/autograder-full-stack/autograder-server/schema/
touch /root/autograder-full-stack/autograder-server/schema/env
then stop the containers.
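The rounded-quote problem above can be caught before running the script by scanning the pasted string for curly quotes; a sketch (the URL here intentionally contains them):

```shell
# Flag “smart” quotes pasted from a browser, which break CLI URLs.
url='“https://storm.cis.fordham.edu:8443/”'
if printf '%s' "$url" | grep -q '[“”]'; then
  verdict="smart quotes found"
else
  verdict="ok"
fi
echo "$verdict"
```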
After updating via git pull I see "Error occured requesting" along with Server Error (500).
Make sure to apply Django migrations; this has to be run after every upgrade/update, e.g., after a git pull:
docker exec -it ag-django python3 manage.py migrate

Entering a course I see "Error occured requesting "/courses/##/projects/": 500" along with Server Error (500).
Make sure that any desired files are in the course's 'project_files' directory, e.g., /root/autograder-full-stack/autograder-server/media_root/courses/course21/project271/project_files/input2.txt

After running docker exec -it ag-django python3 manage.py migrate I see:
Your models in app(s): 'core' have changes that are not yet reflected in a migration, and so won't be applied. Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
You can ignore that warning. It happens due to changing certain default values like the maximum allowed timeout: Django detects it as a change, but it doesn't actually change anything at the database level.

In maillog and/or Docker's ag-django container logs I see errors such as "Relaying denied. IP name lookup failed".
See https://stackoverflow.com/questions/26215021/configure-sendmail-inside-a-docker-container for some suggestions. To work around the issue, I added 172.21.0.0/24 (the IP address space from within the container) to /etc/mail/access and ran 'makemap hash /etc/mail/access < /etc/mail/access' as suggested in https://stackoverflow.com/a/29550650