
Let's Build It

This category is designed to offer practical advice, step-by-step guidance, and code suggestions for you to create your very own “Franken” build to suit your needs… “It’s Alive!!”

18 Topics 604 Posts

Subcategories


  • This category is designed to offer practical advice, step-by-step guidance, and code suggestions for you to create your very own “Franken” build to suit your needs… “It’s Alive!!”

    1 Topic
    8 Posts

    @veronikya it might flicker on load, yes, and that’s normal. It happens because the CSS and JS are loaded before the custom elements, but it shouldn’t be too noticeable on a reasonable connection.

    It shouldn’t flicker at all when navigating through the site though, as the CSS and JS are already loaded.

  • About this category

    Pinned
    5
    2 Votes
    5 Posts
    371 Views

    I’m going to be adding some new posts to the labs category, and will use this going forward when writing code that could easily be adopted by others (a great example is the OGProxy, which I will move here).

    If you have any ideas or would like a walkthrough of how to set something up, then this is the place it should go.

  • 1 Vote
    1 Post
    66 Views
    No one has replied
  • 20 Votes
    110 Posts
    9k Views

    @crazycells said in Setup OGProxy for use in NodeBB:

    are they cached for each user separately?

    No. It’s a shared cache.

    @crazycells said in Setup OGProxy for use in NodeBB:

    additionally, this is also handling youtube videos etc, right?

    No. This is handled by nodebb-plugin-ns-embed.

  • 50 Votes
    107 Posts
    2k Views

    @crazycells


  • 50 Votes
    146 Posts
    18k Views

    Updated the Git repository for the above change:

    https://github.com/phenomlab/nodebb-harmony-threading/commit/14a4e277521d83d219065ffb14154fd5f5cfac69
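
    If you originally installed this by cloning the repository, pulling the latest commit is enough to pick up the change - the path below is an assumption, so point it at wherever your clone lives. If you pasted the JS/CSS into the NodeBB admin panel instead, copy the updated file contents across manually.

    # Path is an assumption - use wherever you cloned nodebb-harmony-threading
    cd ~/nodebb-harmony-threading
    git pull
    # Confirm the commit referenced above is now the latest
    git log --oneline -1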

  • 15 Votes
    27 Posts
    1k Views

    For anyone else coming here who is struggling to get pm2 to work with Umami (as I did - it started, but never seemed to work after a reboot, which is pretty useless), you can use the systemd service below. Obviously, change the parts noted inside the [brackets], and follow the instructions below:

    Instructions

    Open a terminal and create a new systemd service file:

    sudo nano /etc/systemd/system/umami.service

    Add the following content to the file:

    [Unit]
    Description=Umami Analytics Server
    After=network.target

    [Service]
    Type=simple
    User=[umami user]
    WorkingDirectory=[path to umami]
    ExecStart=/usr/local/bin/node [path to umami]/node_modules/.bin/next start
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    Replace [umami user] with the username of the user that should run the Umami service, and [path to umami] with the actual path to your Umami installation.

    Save the file and exit the editor.

    Reload the systemd manager configuration:

    sudo systemctl daemon-reload

    Enable the Umami service to start on boot:

    sudo systemctl enable umami.service

    Start the Umami service:

    sudo systemctl start umami.service

    You can check the status of the service with:

    sudo systemctl status umami.service

    This systemd service file will ensure that Umami starts automatically when the system boots, and it will restart the service if it fails. Remember to adjust the WorkingDirectory and ExecStart paths according to where Umami is installed on your system, and ensure that Node.js is installed and accessible at /usr/local/bin/node (or adjust the ExecStart path to match your Node.js installation).
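
    As a quick sanity check that the service really does survive a reboot (the original problem with pm2), the commands below should confirm it. The port in the last command is an assumption based on Umami’s default of 3000, so adjust it if yours differs:

    # Confirm the unit is registered to start at boot
    systemctl is-enabled umami.service
    # Review the service log from the current boot for any startup errors
    journalctl -u umami.service -b --no-pager
    # Check that Umami is answering locally (assumes the default port of 3000)
    curl -I http://localhost:3000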

  • Custom Board Stats Widget

    Moved
    28
    20 Votes
    28 Posts
    744 Views

    thanks Mark.

  • Block Domain

    Solved
    26
    1 Vote
    26 Posts
    1k Views

    Yes, OGProxy is also functional on dev.

  • 15 Votes
    51 Posts
    2k Views

    Oh yes, that’s what’s super cool - I learn something every day. Then again, I’m starting from a very low level in JS.

  • 19 Votes
    35 Posts
    2k Views

    @DownPW said in Threaded chat support for NodeBB:

    Better like this: add shadow and border-left on self answer

    Of course - you style to your own requirements and taste 🙂 I’ll commit that CSS we discussed yesterday also

  • 14 Votes
    14 Posts
    638 Views

    Just circling back here as I’ve been helping @cagatay this morning on his site, and noticed that if you use a mixture of fa-brands and fa-solid then the code supplied above may produce some odd-looking results.

    If this is the case, replace the function with this:

    $(document).ready(function() {
        $.getJSON('/api/categories', function(data, status) {
            $.each(data.categories, function(key, value) {
                var iconClass = 'fa'; // Default to 'fa' if the icon type is not recognized
                // Check if the icon is FontAwesome Unicode
                if (this.icon.startsWith('&#x') || this.icon.startsWith('&#xf')) {
                    iconClass = 'fa';
                } else if (this.icon.startsWith('fab')) {
                    iconClass = 'fab';
                }
                var categorylist = $("<li class='dropdown-item tree-root'><span class='category-menu'><i class='" + iconClass + " " + this.icon + "'></i><a style='display: inherit;' class='dropdown-item rounded-1' href='/category/" + this.slug + "'>" + this.name + "</a></span></li>" +
                    "<ul class='tree-branch' style='list-style: none;'>" +
                    this.children.map(c => {
                        var childIconClass = 'fa'; // Default to 'fa' for child icons
                        // Check if the child icon is FontAwesome Unicode
                        if (c.icon.startsWith('&#x') || c.icon.startsWith('&#xf')) {
                            childIconClass = 'fas';
                        } else if (c.icon.startsWith('fab')) {
                            childIconClass = 'fab';
                        }
                        return `<li class='dropdown-item tree-node'><span class='category-menu-tree-node'><i class='${childIconClass} ${c.icon}'></i><a class='dropdown-item rounded-1' style='display: inherit;' href='/category/${c.slug}'>${c.name}</a></span></li>`;
                    }).join(" ") + "</ul>"
                );
                if ($(window).width() < 767) {
                    $(".bottombar #thecategories").append(categorylist);
                } else {
                    $(".sidebar-left #thecategories").append(categorylist);
                }
            });
        });
    });

    In fact, if you want to replace it anyway to make your experience “future proof”, you can use this code now 🙂

  • Rotating Star Effect

    Solved
    17
    12 Votes
    17 Posts
    461 Views

    @phenomlab thanks a lot for these, both of the below are awesome! ♥

    https://codepen.io/bennettfeely/pen/vYLmYJz

    https://codepen.io/C-L-the-selector/pen/MWZbWBo

  • 5 Votes
    2 Posts
    201 Views

    Nice. Very good tips Mark 😁

  • 11 Votes
    14 Posts
    596 Views

    @dave1904 excellent news. Thanks for the feedback

  • Corporate Bullshit Generator

    Moved
    4
    4 Votes
    4 Posts
    432 Views

    @crazycells yeah, I’ve noticed that as well. What is it with buzzwords? All they signify to me is that you can “talk the talk” but not “walk the walk”. Anyone can read something on the internet, and after a while, probably recite it word for word, but do they truly understand it?

    I interviewed a guy once who certainly knew buzzwords like the back of his hand, but failed a technical test miserably.

  • MongoDB backup script

    Moved
    4
    6 Votes
    4 Posts
    268 Views

    After playing with the original code in the first post, I’ve modified it a little to suit my needs - posting it here in its full glory in case it’s of use to anyone else. Obviously, you’ll need to change the important parts.

    #!/bin/bash
    #set -eo pipefail
    #
    # MongoDB Backup Script
    # VER. 0.20
    # More Info: http://github.com/micahwedemeyer/automongobackup
    # Note, this is a lobotomized port of AutoMySQLBackup
    # (http://sourceforge.net/projects/automysqlbackup/) for use with
    # MongoDB.
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program; if not, write to the Free Software
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
    #
    #=====================================================================
    #=====================================================================
    # Set the following variables to your system needs
    # (Detailed instructions below variables)
    #=====================================================================

    # Database name to specify a specific database only e.g. myawesomeapp
    # Unnecessary if backup all databases
    # DBNAME=""

    # Collections name list to include e.g. system.profile users
    # DBNAME is required
    # Unecessary if backup all collections
    # COLLECTIONS=""

    # Collections to exclude e.g. system.profile users
    # DBNAME is required
    # Unecessary if backup all collections
    # EXCLUDE_COLLECTIONS=""

    # Username to access the mongo server e.g. dbuser
    # Unnecessary if authentication is off
    DBUSERNAME=""

    # Password to access the mongo server e.g. password
    # Unnecessary if authentication is off
    DBPASSWORD=""

    # Database for authentication to the mongo server e.g. admin
    # Unnecessary if authentication is off
    DBAUTHDB="admin"

    # Host name (or IP address) of mongo server e.g localhost
    DBHOST="127.0.0.1"

    # Port that mongo is listening on
    DBPORT="27017"

    # Backup directory location e.g /backups
    BACKUPDIR=""

    # Mail setup
    # What would you like to be mailed to you?
    # - log    : send only log file
    # - files  : send log file and sql files as attachments (see docs)
    # - stdout : will simply output the log to the screen if run manually.
    # - quiet  : Only send logs if an error occurs to the MAILADDR.
    MAILCONTENT="log"

    # Set the maximum allowed email size in k. (4000 = approx 5MB email [see docs])
    export MAXATTSIZE="4000"

    # Email Address to send mail to? (user@domain.com)
    MAILADDR=""

    # ============================================================================
    # === SCHEDULING AND RETENTION OPTIONS ( Read the doc's below for details )===
    #=============================================================================

    # Do you want to do hourly backups? How long do you want to keep them?
    DOHOURLY="no"
    HOURLYRETENTION=24

    # Do you want to do daily backups? How long do you want to keep them?
    DODAILY="yes"
    DAILYRETENTION=7

    # Which day do you want weekly backups? (1 to 7 where 1 is Monday)
    DOWEEKLY="no"
    WEEKLYDAY=6
    WEEKLYRETENTION=4

    # Do you want monthly backups? How long do you want to keep them?
    DOMONTHLY="no"
    MONTHLYRETENTION=4

    # ============================================================
    # === ADVANCED OPTIONS ( Read the doc's below for details )===
    #=============================================================

    # Choose Compression type. (gzip or bzip2)
    COMP="gzip"

    # Choose if the uncompressed folder should be deleted after compression has completed
    CLEANUP="yes"

    # Additionally keep a copy of the most recent backup in a seperate directory.
    LATEST="yes"

    # Make Hardlink not a copy
    LATESTLINK="yes"

    # Use oplog for point-in-time snapshotting.
    OPLOG="no"

    # Choose other Server if is Replica-Set Master
    REPLICAONSLAVE="no"

    # Allow DBUSERNAME without DBAUTHDB
    REQUIREDBAUTHDB="yes"

    # Maximum files of a single backup used by split - leave empty if no split required
    # MAXFILESIZE=""

    # Command to run before backups (uncomment to use)
    # PREBACKUP=""

    # Command run after backups (uncomment to use)
    # POSTBACKUP=""

    #=====================================================================
    # Options documentation
    #=====================================================================
    # Set DBUSERNAME and DBPASSWORD of a user that has at least SELECT permission
    # to ALL databases.
    #
    # Set the DBHOST option to the server you wish to backup, leave the
    # default to backup "this server".(to backup multiple servers make
    # copies of this file and set the options for that server)
    #
    # You can change the backup storage location from /backups to anything
    # you like by using the BACKUPDIR setting..
    #
    # The MAILCONTENT and MAILADDR options and pretty self explanatory, use
    # these to have the backup log mailed to you at any email address or multiple
    # email addresses in a space seperated list.
    #
    # (If you set mail content to "log" you will require access to the "mail" program
    # on your server. If you set this to "files" you will have to have mutt installed
    # on your server. If you set it to "stdout" it will log to the screen if run from
    # the console or to the cron job owner if run through cron. If you set it to "quiet"
    # logs will only be mailed if there are errors reported. )
    #
    #
    # Finally copy automongobackup.sh to anywhere on your server and make sure
    # to set executable permission. You can also copy the script to
    # /etc/cron.daily to have it execute automatically every night or simply
    # place a symlink in /etc/cron.daily to the file if you wish to keep it
    # somwhere else.
    #
    # NOTE: On Debian copy the file with no extention for it to be run
    # by cron e.g just name the file "automongobackup"
    #
    # Thats it..
    #
    #
    # === Advanced options ===
    #
    # To set the day of the week that you would like the weekly backup to happen
    # set the WEEKLYDAY setting, this can be a value from 1 to 7 where 1 is Monday,
    # The default is 6 which means that weekly backups are done on a Saturday.
    #
    # Use PREBACKUP and POSTBACKUP to specify Pre and Post backup commands
    # or scripts to perform tasks either before or after the backup process.
    #
    #
    #=====================================================================
    # Backup Rotation..
    #=====================================================================
    #
    # Hourly backups are executed if DOHOURLY is set to "yes".
    # The number of hours backup copies to keep for each day (i.e. 'Monday', 'Tuesday', etc.) is set with DHOURLYRETENTION.
    # DHOURLYRETENTION=0 rotates hourly backups every day (i.e. only the most recent hourly copy is kept). -1 disables rotation.
    #
    # Daily backups are executed if DODAILY is set to "yes".
    # The number of daily backup copies to keep for each day (i.e. 'Monday', 'Tuesday', etc.) is set with DAILYRETENTION.
    # DAILYRETENTION=0 rotates daily backups every week (i.e. only the most recent daily copy is kept). -1 disables rotation.
    #
    # Weekly backups are executed if DOWEEKLY is set to "yes".
    # WEEKLYDAY [1-7] sets which day a weekly backup occurs when cron.daily scripts are run.
    # Rotate weekly copies after the number of weeks set by WEEKLYRETENTION.
    # WEEKLYRETENTION=0 rotates weekly backups every week. -1 disables rotation.
    #
    # Monthly backups are executed if DOMONTHLY is set to "yes".
    # Monthy backups occur on the first day of each month when cron.daily scripts are run.
    # Rotate monthly backups after the number of months set by MONTHLYRETENTION.
    # MONTHLYRETENTION=0 rotates monthly backups upon each execution. -1 disables rotation.
    #
    #=====================================================================
    # Please Note!!
    #=====================================================================
    #
    # I take no resposibility for any data loss or corruption when using
    # this script.
    #
    # This script will not help in the event of a hard drive crash. You
    # should copy your backups offline or to another PC for best protection.
    #
    # Happy backing up!
    #
    #=====================================================================
    # Restoring
    #=====================================================================
    # ???
    #
    #=====================================================================
    # Change Log
    #=====================================================================
    # VER 0.11 - (2016-05-04) (author: Claudio Prato)
    #       - Fixed bugs in select_secondary_member() with authdb enabled
    #       - Fixed bugs in Compression function by removing the * symbol
    #       - Added incremental backup feature
    #       - Added option to select the collections to backup
    #
    # VER 0.10 - (2015-06-22) (author: Markus Graf)
    #       - Added option to backup only one specific database
    #
    # VER 0.9 - (2011-10-28) (author: Joshua Keroes)
    #       - Fixed bugs and improved logic in select_secondary_member()
    #       - Fixed minor grammar issues and formatting in docs
    #
    # VER 0.8 - (2011-10-02) (author: Krzysztof Wilczynski)
    #       - Added better support for selecting Secondary member in the
    #         Replica Sets that can be used to take backups without bothering
    #         busy Primary member too much.
    #
    # VER 0.7 - (2011-09-23) (author: Krzysztof Wilczynski)
    #       - Added support for --journal dring taking backup
    #         to enable journaling.
    #
    # VER 0.6 - (2011-09-15) (author: Krzysztof Wilczynski)
    #       - Added support for --oplog during taking backup for
    #         point-in-time snapshotting.
    #       - Added filter for "mongodump" writing "connected to:"
    #         on the standard error, which is not desirable.
    #
    # VER 0.5 - (2011-02-04) (author: Jan Doberstein)
    #       - Added replicaset support (don't Backup on Master)
    #       - Added Hard Support for 'latest' Copy
    #
    # VER 0.4 - (2010-10-26)
    #       - Cleaned up warning message to make it clear that it can
    #         usually be safely ignored
    #
    # VER 0.3 - (2010-06-11)
    #       - Added the DBPORT parameter
    #       - Changed USERNAME and PASSWORD to DBUSERNAME and DBPASSWORD
    #       - Fixed some bugs with compression
    #
    # VER 0.2 - (2010-05-27) (author: Gregory Barchard)
    #       - Added back the compression option for automatically creating
    #         tgz or bz2 archives
    #       - Added a cleanup option to optionally remove the database dump
    #         after creating the archives
    #       - Removed unnecessary path additions
    #
    # VER 0.1 - (2010-05-11)
    #       - Initial Release
    #
    # VER 0.2 - (2015-09-10)
    #       - Added configurable backup rentention options, even for
    #         monthly backups.
    #
    #=====================================================================
    #=====================================================================
    #=====================================================================
    #
    # Should not need to be modified from here down!!
    #
    #=====================================================================
    #=====================================================================
    #=====================================================================

    shellout () {
        if [ -n "$1" ]; then
            echo "$1"
            exit 1
        fi
        exit 0
    }

    # External config - override default values set above
    for x in default sysconfig; do
        if [ -f "/etc/$x/automongobackup" ]; then
            # shellcheck source=/dev/null
            source /etc/$x/automongobackup
        fi
    done

    # Include extra config file if specified on commandline, e.g. for backuping several remote dbs from central server
    # shellcheck source=/dev/null
    [ ! -z "$1" ] && [ -f "$1" ] && source ${1}

    #=====================================================================

    PATH=/usr/local/bin:/usr/bin:/bin
    timestamp=$(date +%F-%T)
    DATE=$(date +%F-%H%M%S)    # Datestamp e.g 2002-09-21
    HOD=$(date +%s)            # Current timestamp for PITR backup
    DOW=$(date +%A)            # Day of the week e.g. Monday
    DNOW=$(date +%u)           # Day number of the week 1 to 7 where 1 represents Monday
    DOM=$(date +%d)            # Date of the Month e.g. 27
    M=$(date +%B)              # Month e.g January
    W=$(date +%V)              # Week Number e.g 37
    VER=0.11                   # Version Number
    LOGFILE=$BACKUPDIR/$DBHOST-$(date +%F-%H%M%S).log          # Logfile Name
    LOGERR=$BACKUPDIR/ERRORS_$DBHOST-$(date +%F-%H%M%S).log    # Logfile Name
    OPT=""                     # OPT string for use with mongodump
    OPTSEC=""                  # OPT string for use with mongodump in select_secondary_member function
    QUERY=""                   # QUERY string for use with mongodump
    HOURLYQUERY=""             # HOURLYQUERY string for use with mongodump

    # Do we need to use a username/password?
    if [ "$DBUSERNAME" ]; then
        OPT="$OPT --username=$DBUSERNAME --password=$DBPASSWORD"
        if [ "$REQUIREDBAUTHDB" = "yes" ]; then
            OPT="$OPT --authenticationDatabase=$DBAUTHDB"
        fi
    fi

    # Do we need to use a username/password for ReplicaSet Secondary Members Selection?
    if [ "$DBUSERNAME" ]; then
        OPTSEC="$OPTSEC --username=$DBUSERNAME --password=$DBPASSWORD"
        if [ "$REQUIREDBAUTHDB" = "yes" ]; then
            OPTSEC="$OPTSEC --authenticationDatabase=$DBAUTHDB"
        fi
    fi

    # Do we use oplog for point-in-time snapshotting?
    if [ "$OPLOG" = "yes" ] && [ -z "$DBNAME" ]; then
        OPT="$OPT --oplog"
    fi

    # Do we need to backup only a specific database?
    if [ "$DBNAME" ]; then
        OPT="$OPT -d $DBNAME"
    fi

    # Do we need to backup only a specific collections?
    if [ "$COLLECTIONS" ]; then
        for x in $COLLECTIONS; do
            OPT="$OPT --collection $x"
        done
    fi

    # Do we need to exclude collections?
    if [ "$EXCLUDE_COLLECTIONS" ]; then
        for x in $EXCLUDE_COLLECTIONS; do
            OPT="$OPT --excludeCollection $x"
        done
    fi

    # Do we use a filter for hourly point-in-time snapshotting?
    if [ "$DOHOURLY" == "yes" ]; then
        # getting PITR START timestamp
        # shellcheck disable=SC2012
        [ "$COMP" = "gzip" ] && HOURLYQUERY=$(ls -t $BACKUPDIR/hourly | head -n 1 | cut -d '.' -f3)
        # setting the start timestamp to NOW for the first execution
        if [ -z "$HOURLYQUERY" ]; then
            QUERY=""
        else
            # limit the documents included in the output of mongodump
            # shellcheck disable=SC2016
            QUERY='{ "ts" : { $gt : Timestamp('$HOURLYQUERY', 1) } }'
        fi
    fi

    # Create required directories
    mkdir -p $BACKUPDIR/{hourly,daily,weekly,monthly} || shellout 'failed to create directories'

    if [ "$LATEST" = "yes" ]; then
        rm -rf "$BACKUPDIR/latest"
        mkdir -p "$BACKUPDIR/latest" || shellout 'failed to create directory'
    fi

    # Do we use a filter for hourly point-in-time snapshotting?
    if [ "$DOHOURLY" == "yes" ]; then
        # getting PITR START timestamp
        # shellcheck disable=SC2012
        [ "$COMP" = "gzip" ] && HOURLYQUERY=$(ls -t $BACKUPDIR/hourly | head -n 1 | cut -d '.' -f3)
        # setting the start timestamp to NOW for the first execution
        if [ -z "$HOURLYQUERY" ]; then
            QUERY=""
        else
            # limit the documents included in the output of mongodump
            # shellcheck disable=SC2016
            QUERY='{ "ts" : { $gt : Timestamp('$HOURLYQUERY', 1) } }'
        fi
    fi

    # Check for correct sed usage
    if [ "$(uname -s)" = 'Darwin' ] || [ "$(uname -s)" = 'FreeBSD' ]; then
        SED="sed -i ''"
    else
        SED="sed -i"
    fi

    # IO redirection for logging.
    touch "$LOGFILE"
    exec 6>&1           # Link file descriptor #6 with stdout.
                        # Saves stdout.
    exec > "$LOGFILE"   # stdout replaced with file $LOGFILE.
    touch "$LOGERR"
    exec 7>&2           # Link file descriptor #7 with stderr.
                        # Saves stderr.
    exec 2> "$LOGERR"   # stderr replaced with file $LOGERR.

    # When a desire is to receive log via e-mail then we close stdout and stderr.
    [ "x$MAILCONTENT" == "xlog" ] && exec 6>&- 7>&-

    # Functions

    # Database dump function
    dbdump () {
        if [ -n "$QUERY" ]; then
            # filter for point-in-time snapshotting and if DOHOURLY=yes
            # shellcheck disable=SC2086
            mongodump --quiet --host=$DBHOST:$DBPORT --out="$1" $OPT -q "$QUERY"
            MDUMPSTATUS=$?
        else
            # all others backups type
            # shellcheck disable=SC2086
            mongodump --quiet --host=$DBHOST:$DBPORT --out="$1" $OPT
            MDUMPSTATUS=$?
        fi
        if [ $MDUMPSTATUS -ne 0 ]; then
            echo "$timestamp - ERROR: mongodump failed: $1" >&2
            return 1
        fi
        [ -e "$1" ] && return 0
        echo "$timestamp - ERROR: mongodump failed to create dumpfile: $1" >&2
        return 1
    }

    #
    # Select first available Secondary member in the Replica Sets and show its
    # host name and port.
    #
    function select_secondary_member {
        # We will use indirect-reference hack to return variable from this function.
        local __return=$1

        # Return list of with all replica set members
        # shellcheck disable=SC2086
        members=( $(mongo --quiet --host $DBHOST:$DBPORT --eval 'rs.conf().members.forEach(function(x){ print(x.host) })' $OPTSEC ) )

        # Check each replset member to see if it's a secondary and return it.
        if [ ${#members[@]} -gt 1 ]; then
            for member in "${members[@]}"; do
                is_secondary=$(mongo --quiet --host "$member" --eval 'rs.isMaster().secondary' $OPTSEC )
                case "$is_secondary" in
                    'true')  # First secondary wins ...
                        secondary=$member
                        break
                        ;;
                    'false') # Skip particular member if it is a Primary.
                        continue
                        ;;
                    *)       # Skip irrelevant entries. Should not be any anyway ...
                        continue
                        ;;
                esac
            done
        fi

        if [ -n "$secondary" ]; then
            # Ugly hack to return value from a Bash function ...
            # shellcheck disable=SC2086
            eval $__return="'$secondary'"
        fi
    }

    if [ -n "$MAXFILESIZE" ]; then
        write_file() {
            split --bytes "$MAXFILESIZE" --numeric-suffixes - "${1}-"
        }
    else
        write_file() {
            cat > "$1"
        }
    fi

    # Compression function plus latest copy
    compression () {
        SUFFIX=""
        dir=$(dirname "$1")
        file=$(basename "$1")
        if [ -n "$COMP" ]; then
            [ "$COMP" = "gzip" ] && SUFFIX=".tgz"
            [ "$COMP" = "bzip2" ] && SUFFIX=".tar.bz2"
            echo $timestamp - Tar and $COMP to "$file$SUFFIX"
            cd "$dir" || return 1
            tar -cf - "$file" | $COMP --stdout | write_file "${file}${SUFFIX}"
            cd - >/dev/null || return 1
        else
            echo "$timestamp - No compression option set, check advanced settings"
        fi
        if [ "$LATEST" = "yes" ]; then
            if [ "$LATESTLINK" = "yes" ];then
                COPY="ln"
            else
                COPY="cp"
            fi
            $COPY "$1$SUFFIX" "$BACKUPDIR/latest/"
        fi
        if [ "$CLEANUP" = "yes" ]; then
            echo $timestamp - Cleaning up folder at "$1"
            rm -rf "$1"
        fi
        return 0
    }

    # Run command before we begin
    if [ "$PREBACKUP" ]; then
        echo "$timestamp - Prebackup command output."
        eval "$PREBACKUP"
        echo
    fi

    # Hostname for LOG information
    if [ "$DBHOST" = "localhost" ] || [ "$DBHOST" = "127.0.0.1" ]; then
        HOST=$(hostname)
        if [ "$SOCKET" ]; then
            OPT="$OPT --socket=$SOCKET"
        fi
    else
        HOST=$DBHOST
    fi

    # Try to select an available secondary for the backup or fallback to DBHOST.
    if [ "x${REPLICAONSLAVE}" == "xyes" ]; then
        # Return value via indirect-reference hack ...
        select_secondary_member secondary
        if [ -n "$secondary" ]; then
            DBHOST=${secondary%%:*}
            DBPORT=${secondary##*:}
        else
            SECONDARY_WARNING="WARNING: No suitable Secondary found in the Replica Sets. Falling back to ${DBHOST}."
        fi
    fi

    if [ ! -z "$SECONDARY_WARNING" ]; then
        echo
        echo "$SECONDARY_WARNING"
    fi

    echo "$timestamp - Database Server - $HOST on $DBHOST"
    echo "$timestamp - Backup started $(date)"

    # Monthly Full Backup of all Databases
    if [[ $DOM = "01" ]] && [[ $DOMONTHLY = "yes" ]]; then
        echo "$timestamp - Monthly Full Backup"
        echo
        # Delete old monthly backups while respecting the set rentention policy.
        if [[ $MONTHLYRETENTION -ge 0 ]] ; then
            NUM_OLD_FILES=$(find $BACKUPDIR/monthly -depth -not -newermt "$MONTHLYRETENTION month ago" -type f | wc -l)
            if [[ $NUM_OLD_FILES -gt 0 ]] ; then
                echo $timestamp - Deleting "$NUM_OLD_FILES" global setting backup file\(s\) older than "$MONTHLYRETENTION" month\(s\) old.
                find $BACKUPDIR/monthly -not -newermt "$MONTHLYRETENTION month ago" -type f -delete
            fi
        fi
        FILE="$BACKUPDIR/monthly/$DATE.$M"

    # Weekly Backup
    elif [[ "$DNOW" = "$WEEKLYDAY" ]] && [[ "$DOWEEKLY" = "yes" ]] ; then
        echo Weekly Backup
        echo
        if [[ $WEEKLYRETENTION -ge 0 ]] ; then
            # Delete old weekly backups while respecting the set rentention policy.
            NUM_OLD_FILES=$(find $BACKUPDIR/weekly -depth -not -newermt "$WEEKLYRETENTION week ago" -type f | wc -l)
            if [[ $NUM_OLD_FILES -gt 0 ]] ; then
                echo $timestamp - Deleting "$NUM_OLD_FILES" global setting backup file\(s\) older than "$WEEKLYRETENTION" week\(s\) old.
                find $BACKUPDIR/weekly -not -newermt "$WEEKLYRETENTION week ago" -type f -delete
            fi
        fi
        FILE="$BACKUPDIR/weekly/week.$W.$DATE"

    # Daily Backup
    elif [[ $DODAILY = "yes" ]] ; then
        echo $timestamp - Daily Backup of Databases
        echo
        # Delete old daily backups while respecting the set rentention policy.
        if [[ $DAILYRETENTION -ge 0 ]] ; then
            NUM_OLD_FILES=$(find $BACKUPDIR/daily -depth -not -newermt "$DAILYRETENTION days ago" -type f | wc -l)
            if [[ $NUM_OLD_FILES -gt 0 ]] ; then
                echo Deleting "$NUM_OLD_FILES" global setting backup file\(s\) made in previous weeks.
                find "$BACKUPDIR/daily" -not -newermt "$DAILYRETENTION days ago" -type f -delete
            fi
        fi
        # Below changed to remove .$DOW as this is not required
        FILE="$BACKUPDIR/daily/$DATE"

    # Hourly Backup
    elif [[ $DOHOURLY = "yes" ]] ; then
        echo $timestamp - Hourly Backup of Databases
        echo
        # Delete old hourly backups while respecting the set rentention policy.
        if [[ $HOURLYRETENTION -ge 0 ]] ; then
            NUM_OLD_FILES=$(find $BACKUPDIR/hourly -depth -not -newermt "$HOURLYRETENTION hour ago" -type f | wc -l)
            if [[ $NUM_OLD_FILES -gt 0 ]] ; then
                echo "$timestamp - Deleting $NUM_OLD_FILES global setting backup file\(s\) made in previous weeks."
                find $BACKUPDIR/hourly -not -newermt "$HOURLYRETENTION hour ago" -type f -delete
            fi
        fi
        FILE="$BACKUPDIR/hourly/$DATE.$DOW.$HOD"
        # convert timestamp to date: echo $TIMESTAMP | gawk '{print strftime("%c", $0)}'

    fi

    # FILE will not be set if no frequency is selected.
    if [[ -z "$FILE" ]] ; then
        echo "$timestamp - ERROR: No backup frequency was chosen."
        echo "$timestamp - Please set one of DOHOURLY,DODAILY,DOWEEKLY,DOMONTHLY to \"yes\""
        exit 1
    fi

    dbdump "$FILE" && compression "$FILE"

    echo "$timestamp - Backup completed $(date)"
    echo
    echo "Total disk space used for backup storage"
    echo
    echo "Size - Location "
    du -hs "$BACKUPDIR"
    echo

    # Run command when we're done
    if [ "$POSTBACKUP" ]; then
        echo "$timestamp - Postbackup command output."
        echo
        eval "$POSTBACKUP"
        echo
    fi

    # Clean up IO redirection if we plan not to deliver log via e-mail.
    [ ! "x$MAILCONTENT" == "xlog" ] && exec 1>&6 2>&7 6>&- 7>&-

    if [ -s "$LOGERR" ]; then
        eval "$SED" "/^connected/d" "$LOGERR"
    fi

    if [ "$MAILCONTENT" = "log" ]; then
        mail -s "Mongo Backup Log for $HOST - $DATE" "$MAILADDR" < "$LOGFILE"

        if [ -s "$LOGERR" ]; then
            cat "$LOGERR"
            mail -s "ERRORS REPORTED: Mongo Backup error Log for $HOST - $DATE" "$MAILADDR" < "$LOGERR"
        fi
    else
        if [ -s "$LOGERR" ]; then
            cat "$LOGFILE"
            echo
            echo "$timestamp - ###### WARNING ######"
            echo "$timestamp - STDERR written to during mongodump execution."
            echo "$timestamp - The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:"
            cat "$LOGERR"
        else
            cat "$LOGFILE"
        fi
    fi

    # TODO: Would be nice to know if there were any *actual* errors in the $LOGERR
    STATUS=0
    if [ -s "$LOGERR" ]; then
        STATUS=1
    fi

    # Clean up Logfile
    rm -f "$LOGFILE" "$LOGERR"

    exit $STATUS

    One other thing I forgot to mention here is that you’ll need mailutils installed on your server for the mail option to work 🙂
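
    As a rough sketch of how you might wire both of those up on Debian or Ubuntu (the package manager and the script filename are assumptions - adjust for your distro and whatever you called the script), following the script’s own note about /etc/cron.daily:

    # mailutils provides the "mail" command used when MAILCONTENT="log"
    sudo apt install mailutils
    # Copy the script into cron.daily with no extension (run-parts skips
    # filenames containing dots) and make it executable
    sudo cp mongodb-backup.sh /etc/cron.daily/automongobackup
    sudo chmod +x /etc/cron.daily/automongobackup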

  • 7 Votes
    7 Posts
    390 Views

    Oh yeah great 🙂