theLink 10.0
Run the performance tests created previously with build.bash.
usage: performance.bash ?options...? ?setup...|ALL? ?tests...|ALL?

  options:
    -old            # display the old results  : info
    -strace         # filter for client/server : open files
    -strace-server  # filter for server        : open files
    -vg             # use valgrind             : prefix
    -x|-v           # set bash debug flag      : debugging
    -debug          # be verbose               : debugging
    -h|--help       # get this message         : help

  setup: ......................................................
    perf-release|pr|r     # perf-release setup    : shared & threads & NORMAL optimization
    perf-aggressive|pa|a  # perf-aggressive setup : static & NO threads & AGGRESSIVE optimization

  tests: ......................................................
    c_pipe    c_uds_fork    c_uds_thread    c_uds_spawn
    cc_pipe   cc_uds_fork   cc_uds_thread   cc_uds_spawn
    tcl_pipe  tcl_uds_fork  tcl_uds_thread  tcl_uds_spawn
    atl_pipe  atl_uds_fork  atl_uds_thread  atl_uds_spawn
    jv_pipe   jv_uds_thread jv_uds_spawn
    cs_pipe   cs_uds_thread cs_uds_spawn
    py_pipe   py_uds_fork   py_uds_spawn
    rb_pipe   rb_uds_fork   rb_uds_spawn

Run the performance tests created previously with build.bash:

  installation directory : WORK_DIR_BUILD/inst/TARGET/SETUP
  build directory        : NHI1_BUILD/TARGET/SETUP
  setup(s)               : perf-release perf-aggressive
  target(s)              : x86_64-suse-linux-gnu

The 'setup' is filtered out of the arguments.
If NO 'setup' is found then ALL 'setups' are used.

The 'tests' are filtered out of the arguments and applied to the list of
possible 'tests' using a regular expression.
If NO 'tests' are found then ALL 'tests' are used.

The 'target' is always the 'target' that matches the executing machine,
defined by NHI1_target.

All arguments that are NOT used by 'performance.bash' are passed on as
'perfclient' arguments.

The performance is measured in transactions per second; by default each
test runs for 2 seconds (--sec 2).

The tests can be adjusted via the command line, whereby only tests run with
the option '--all' are later taken into account in 'total_link.perf'.
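The regular-expression filtering of 'tests' arguments described above can be sketched as follows. This is a minimal, hypothetical illustration (the `tgtAllL` subset and the pattern are chosen for the example, not taken from a real run):

```shell
#!/usr/bin/env bash
# Sketch: a 'tests' argument is treated as a bash regular expression
# and applied against the list of known test names (illustrative subset).
tgtAllL=(c_pipe c_uds_fork c_uds_thread cc_pipe cc_uds_thread)
pattern='_thread'   # e.g. select all 'thread' targets
found=()
for t in "${tgtAllL[@]}"; do
  # same [[ =~ ]] operator the script relies on
  if [[ "$t" =~ $pattern ]]; then found+=("$t"); fi
done
echo "${found[@]}"   # -> c_uds_thread cc_uds_thread
```

Any argument matching no test name at all is passed through to 'perfclient' instead.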
Example:

  SEND persistent on 'perf-release' for 'c_uds_thread' target
    > ./performance.bash --send-persistent r c_uds_thread

  SEND on 'perf-release' for all 'thread' targets
    > ./performance.bash --send r _thread

  BUS DATA on 'perf-aggressive' for 'c_pipe' and 'cc_pipe'
    > ./performance.bash --bus a ^c_pipe ^cc_pipe
    > ./performance.bash --bus a '^(c|cc)_pipe'

perfclient usage:
  > usage: perfclient [OPTION]... [ARGUMENT]...
  >
  > This tool is the client part of the performance test toolkit and expects
  > the 'perfserver' as argument.
  >
  > The following tests are defined:
  >   --all                    Do all the following tests.
  >   --all-performance        Do all tests relevant for performance testing.
  >   --all-no-parent          Do all the following tests but without parent.
  >   --send-nothing           Send an empty package just to test the service callback.
  >   --send                   The data is sent from the client to the server using ...
  >                              1. a 1 byte char
  >                              2. a 2 byte short
  >                              3. a 4 byte integer
  >                              4. an 8 byte double
  >                              5. a binary of size between 1 and 1000 bytes
  >   --send-string            Same as '--send' but use 'string-data' only
  >   --send-and-wait          Same as '--send' but use 'MqSendEND_AND_WAIT' -> DEFAULT
  >   --send-and-callback      Same as '--send' but use 'MqSendEND_AND_CALLBACK'
  >   --send-persistent        Use a persistent-transaction together with 'MqSendEND_AND_WAIT'
  >   --storage STRING         database file, #memdb# or #tmpdb# (default: file)
  >   --parent                 Create and delete a PARENT-context in a loop
  >   --spawn|--thread|--fork  choose starter (default: lng-specific)
  >   --wrk NUMBER             number of parallel workers (default: 1)
  >   --child                  Create and delete a CHILD-context in a loop
  >   --bus or --bfl           Create and delete a MkBufferStreamS or MkBufferListS in a loop
  >   --bin/str                send and wait for binary/string data
  >
  > perfclient [ARGUMENT]... syntax:
  >   perfclient [OPTION]... @ server [OPTION]... [ARGUMENT]
  >
  > msgque [OPTION]:
  >   --help-msgque            print msgque specific help
  >
  > perfclient [OPTION]:
  >   --num NUMBER             number of test-cycles (default: -1)
  >   --sec SECONDS            seconds per test (default: 2)
  >   --timeout-event SECONDS  timeout for background wait (default: 3)
  >   -h, --help               print this help
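The `@` in the syntax above separates the client's own options from the server command line. A minimal sketch of that splitting convention (the argument values below are invented for illustration, not from a real invocation):

```shell
#!/usr/bin/env bash
# Sketch: split "perfclient [OPTION]... @ server [OPTION]..." at the
# first '@' into a client part and a server part (values invented).
args=(--send --sec 2 @ ./perfserver --uds --file ./socket.uds.0)
clientL=() serverL=() seenAt=0
for a in "${args[@]}"; do
  if [[ "$a" == "@" ]] && (( seenAt == 0 )); then seenAt=1; continue; fi
  if (( seenAt )); then serverL+=("$a"); else clientL+=("$a"); fi
done
echo "client: ${clientL[*]}"   # -> client: --send --sec 2
echo "server: ${serverL[*]}"   # -> server: ./perfserver --uds --file ./socket.uds.0
```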
#!/bin/bash
#+
#: @file      NHI1/performance/performance.bash
#: @brief     tag: nhi1-release-250425
#: @copyright (C) NHI - #1 - Project - Group
#:            This software has NO permission to copy,
#:            please contact AUTHOR for additional information
#:
# shellcheck disable=SC2016,SC2031

# find top-level directory
Nhi1Env -silent || exit 1

set -o nounset
set -o errexit
set -o pipefail

trap 'error "performance test failed"' ERR

initial_argsL=("$@")

# stdlib.bash
# shellcheck source=./libperf.bash
source "$NHI1_HOME/performance/libperf.bash"

typeset -i PORT=7777
typeset -i UDS=0
typeset -i isServer=0

function stderr {
  echo "stderr: $*" 1>&2
}

# shellcheck disable=SC2120
function usage {
  if (( $# )) ; then exec 1>&2 ; echo "ERROR: $*" ; fi
  usage_default.tcl "$cmd0" "?setup...|ALL? ?tests...|ALL?"
  echo
  printTESTS " tests: ......................................................"
  echo
  cat <<EOF
Run the performance tests created previously with build.bash:

  installation directory : WORK_DIR_BUILD/$perfdir/TARGET/SETUP
  build directory        : NHI1_BUILD/TARGET/SETUP
  setup(s)               : $(setupList)
  target(s)              : $(targetList)

The 'setup' is filtered out of the arguments.
If NO 'setup' is found then ALL 'setups' are used.

The 'tests' are filtered out of the arguments and applied to the list of
possible 'tests' using a regular expression.
If NO 'tests' is found then ALL 'tests' are used.

The 'target' is always the 'target' that matches the executing machine
defined by NHI1_target.

All arguments that are NOT used by 'performance.bash' are used as
'perfclient' arguments.

The performance is measured in transactions per second, whereby the tests
run for 2 seconds each by default (--sec 2).

The tests can be adjusted via the command line, whereby only tests with the
option '--all' are later taken into account in the 'total_link.perf'.
Example:

  SEND persistent on 'perf-release' for 'c_uds_thread' target
    > ./performance.bash --send-persistent r c_uds_thread

  SEND on 'perf-release' for all 'thread' targets
    > ./performance.bash --send r _thread

  BUS DATA on 'perf-aggressive' for 'c_pipe' and 'cc_pipe'
    > ./performance.bash --bus a ^c_pipe ^cc_pipe
    > ./performance.bash --bus a '^(c|cc)_pipe'
EOF
  if hash perfclient ; then
    echo -e "\n perfclient usage:"
    perfclient -h |& sed -e 's/^/   > /'
  fi
  (( $# )) && exit 1 || exit 0
}

function mydebug() {
  debug "$*" | msg_filter | sed -e 's/^/  /'
}

function mykill {
  debug "kill -$*"
  if [[ -d "/proc/$2" ]] ; then /usr/bin/kill "$@" 2>/dev/null ; fi
  true
}

myfailed=()
myfail() {
  local msg="[$setup:$feature] → $*"
  myfailed+=("$msg")
  warning "$msg"
}

function killtree {
  local _pid=$1
  local _sig=${2:-KILL}
  local _skip_stop=${3:-3}
  # needed to stop a quickly forking parent from producing a new child
  # between child killing and parent killing
  if (( _skip_stop <= 0 )) ; then mykill -stop "${_pid}"; fi
  for _child in $(ps -o pid --no-headers --ppid "${_pid}"); do
    killtree "${_child}" "${_sig}" $((_skip_stop-1))
  done
  mykill "-${_sig}" "${_pid}"
  true
}

function filter {
  # apply the commandline filter
  # local shell because "env-build.sh" is sourced
  (
    export NHI1_setup="$1"; shift
    # shellcheck source=../env-debug.sh
    source "$build_root_dir/$NHI1_setup/env-build.sh"
    # tgtAllL means everything
    R=()
    # loop over all setupL - filter items
    for A ; do
      args_split_by B _ "$A"
      # first item is the language, uppercase, USE_ prefix
      # shellcheck disable=SC2154
      a="USE_${B[0]^^}"
      # check if language is supported in setup
      if [[ "${!a}" == "no" ]] ; then continue; fi
      R+=("$A")
    done
    if (( ${#R[@]} == 0 )) ; then stderr "filter: nothing found to report" ; fi
    echo "${R[@]}"
  )
}

printTESTS() {
  (( $# )) && echo "$*"
  sp="${1%%[![:space:]]*}"
  (
    old=""
    for x in "${tgtAllL[@]}"; do
      pre="${x%%_*}"
      if [[ "$pre" != "$old" ]] ; then
        echo -en "\n "
        old="$pre"
      fi
      echo -n "$x "
    done
  ) | column -t | sed "s/^/$sp /"
}
tgtAllL=(
  c_pipe   c_uds_fork    c_uds_thread   c_uds_spawn
  cc_pipe  cc_uds_fork   cc_uds_thread  cc_uds_spawn
  tcl_pipe tcl_uds_fork  tcl_uds_thread tcl_uds_spawn
  atl_pipe atl_uds_fork  atl_uds_thread atl_uds_spawn
  jv_pipe  jv_uds_thread jv_uds_spawn
  cs_pipe  cs_uds_thread cs_uds_spawn
  py_pipe  py_uds_fork   py_uds_spawn
  rb_pipe  rb_uds_fork   rb_uds_spawn
)
# (
#   pl_pipe  pl_uds_fork   pl_uds_spawn
#   vb_pipe  vb_uds_thread vb_uds_spawn
#   php_pipe php_uds_fork  php_uds_spawn
# )
#brain_pipe #brain_uds_fork #brain_uds_thread #brain_uds_spawn
#vb_pipe #vb_uds_thread #vb_uds_spawn
# declare -p tgtAllL

## ==========================================================================================
## MAIN

NUM=()
#NUM=(--num 10)
prefixL=()
prefixServerL=()
otherL=()
oldB=0

set - "${initial_argsL[@]}"
while (( $# > 0 )) ; do
  case "$1" in
    -old)           # display the old results : info
      oldB=1 ;;
    -strace)        # filter for client/server : open files
      prefixL=('strace' '-e' 'trace=openat') ;;
    -strace-server) # filter for server : open files
      prefixServerL=('strace' '-e' 'trace=openat') ;;
    -vg)            # use valgrind : prefix
      error "-vg not implemented"
      prefixL=(--prefix=vg) ;;
    -x|-v)          # set bash debug flag : debugging
      set "$1" ;;
    -debug)         # be verbose : debugging
      debugI=1 ;;
    -h|--help)      # get this message : help
      usage ;;
    *)
      otherL+=("$1") ;;
  esac
  shift
done

#if (( ${#otherL[@]} == 0 )) ; then usage "expect min ONE argument" ; fi

#include ./libperf.bash
parse_args "${otherL[@]}"

mydebug "setupL=${setupL[*]}, argsL=${argsL[*]}"

if (( ${#argsL[@]} == 0 )) ; then argsL=("ALL") ; fi

# analyze arguments
tgtFoundL=()
argFoundL=()
for b in "${argsL[@]}" ; do
  if [[ "$b" = "ALL" ]] ; then
    tgtFoundL=("${tgtAllL[@]}")
    continue
  fi
  foundB=0
  for a in "${tgtAllL[@]}" ; do
    if [[ "$a" =~ $b ]] ; then
      foundB=1
      tgtFoundL+=("$a")
    fi
  done
  if (( !foundB )) ; then
    argFoundL+=("$b")
  fi
done
unset a b

if (( ${#tgtFoundL[@]} == 0 )) ; then
  tgtFoundL=("${tgtAllL[@]}")
else
  # make unique
  mapfile -t tgtFoundL < <(tr ' ' '\n' <<< "${tgtFoundL[@]}" | sort -u)
  if (( "${#tgtFoundL[@]}" == 0 )) ; then
    echo
    printTESTS "ERROR: nothing found with re pattern '${argsL[*]}', available 'arguments' are:" 1>&2
    exit 1
  fi
fi

mydebug "tgtFoundL<${tgtFoundL[*]}>, argFoundL<${argFoundL[*]}>"

declare -i testNum=0

## do the tests
for setup in "${setupL[@]}" ; do
  pYellow "setup=$setup"
  for feature in $(filter "$setup" "${tgtFoundL[@]}") ; do
    if (( testNum )) ; then sleep 3; fi
    case $feature:$setup in
      *thread*:perf-aggressive) continue;;
      tcl*fork:perf-release)    continue;;
      atl*fork:perf-release)    continue;;
    esac
    pOrange " > feature=$feature"

    outdir="$result_root_dir/$setup"
    mkdir -p "$outdir"
    outfile="$outdir/$feature.perf"
    if (( oldB )) ; then
      cat "$outfile"
      continue
    fi

    tmpdir="$NHI1_abs_top_builddir/performance"
    mkdir -p "$tmpdir"
    tmpfile="$tmpdir/temp.perf.$setup.$feature"
    truncate --size=0 "$tmpfile"

    CLIENT=("$inst_root_dir/$setup/inst/sbin/c/$NHI1_target-perfclient")

    ITP="undef"
    case $feature in
      py_*)  ITP="$(runEval echo '$PYTHON')"  ;;
      tcl_*) ITP="$(runEval echo '$TCLSH')"   ;;
      atl_*) ITP="$(runEval echo '$ATLSH')"   ;;
      jv_*)  ITP="$(runEval echo '$JAVA')"    ;;
      rb_*)  ITP="$(runEval echo '$RUBY')"    ;;
      cs_*)  ITP="$(runEval echo '$CLREXEC')" ;;
    esac

    case $feature in
      jv_*) example="$inst_root_dir/$setup/inst/share/NHI1" ;;
      cs_*) example="$inst_root_dir/$setup/inst/share/NHI1" ;;
      *)    example="$inst_root_dir/$setup/inst/sbin" ;;
    esac

    case $feature in
      brain_*) SERVER=("$example/$NHI1_target-abrain") ;;
      py_*)    SERVER=("$ITP" "$example/py/$NHI1_target-perfserver.py") ;;
      rb_*)    SERVER=("$ITP" "$example/rb/$NHI1_target-perfserver.rb") ;;
      jv_*)    SERVER=("$ITP" "-jar" "$example/perfserver.jar") ;;
      cs_*)    SERVER=("$ITP" "$example/perfserver.exe") ;;
      pl_*)    SERVER=("$example/pl/$NHI1_target-perfserver.pl") ;;
      php_*)   SERVER=("$example/php/$NHI1_target-perfserver.php") ;;
      tcl_*)   SERVER=("$ITP" "$example/tcl/$NHI1_target-perfserver.tcl") ;;
      atl_*)   SERVER=("$ITP" "$example/atl/$NHI1_target-perfserver.atl") ;;
      go_*)
        SERVER=("$example/go/$NHI1_target-perfserver.go") ;;
      cc_*)    SERVER=("$example/cc/$NHI1_target-perfserver") ;;
      c_*)     SERVER=("$example/c/$NHI1_target-perfserver") ;;
      *)
        echo "ERROR invalid server '$feature'" 1>&2
        exit 1 ;;
    esac

    CG=()
    SG=()
    UDS_FILE=""
    case $feature in
      *_pipe)
        CG=("${prefixL[@]}")
        CL=("${CLIENT[@]}" --timeout 2 "${NUM[@]}" "${argFoundL[@]}" @ "${prefixServerL[@]}" "${SERVER[@]}")
        isServer=0
        ;;
      *_uds_*)
        SG=("${prefixL[@]}")
        UDS_FILE="./socket.uds.$UDS"
        COM_ARGS=(--uds --file "$UDS_FILE")
        (( UDS=UDS+1 )) || true
        isServer=1
        ;;
      *_tcp_*)
        SG=("${prefixL[@]}")
        COM_ARGS=(--tcp --host localhost --port $PORT)
        (( PORT=PORT+1 )) || true
        isServer=1
        ;;
    esac

    # SERVER --------------------------------------------------------------------
    if (( isServer )) ; then
      case $feature in
        *_thread) START=thread;;
        *_spawn)  START=spawn;;
        *_fork)   START=fork;;
      esac
      SV=("${prefixServerL[@]}" "${SERVER[@]}" "${COM_ARGS[@]}" "--$START")
      CL=("${CLIENT[@]}" "--timeout" "2" "${NUM[@]}" "${argFoundL[@]}" "${COM_ARGS[@]}")
      if ((debugI)) ; then
        mydebug "${SG[*]} ${SV[*]}"
      else
        echo " > ${SV[*]}" | msg_filter | tee "$tmpfile"
        NHI1_silent=2 runInst "${SG[@]}" "${SV[@]}" 2>/dev/null 1>&2 &
        disown
        # shellcheck disable=SC2034
        kill_pid=$!
        sleep 1
      fi
    fi

    # CLIENT --------------------------------------------------------------------
    if ((debugI)) ; then
      mydebug "${CG[*]} ${CL[*]}"
      continue
    fi
    echo " > ${CL[*]}" | msg_filter | tee -a "$tmpfile"
    NHI1_silent=2 runInst "${CG[@]}" "${CL[@]}" 2>&1 | tee -a "$tmpfile" | sed 's/^/  /'
    PID=$!
    # check end
    if grep --silent --ignore-case 'PerfClient.*PerfClientExec.*end' "$tmpfile" ; then
      if [[ " ${argFoundL[*]} " =~ [[:space:]]--all-performance[[:space:]] ]] ; then
        num="$(grep -c ^ "$tmpfile")"
        if (( num == 12 || num == 11 )) ; then
          echo " > mv '$tmpfile' '$outfile'"
          mv "$tmpfile" "$outfile"
        else
          myfail "expect 12 lines in file '$tmpfile' but get:"$'\n'"  $(sed -e 's/^/  | /' "$tmpfile")"
        fi
      else
        rm "$tmpfile"
      fi
    else
      myfail "end missing in perf file"
    fi

    # cleanup server
    if (( isServer )) ; then
      killtree "$PID" || true
      if [[ -e "$UDS_FILE" ]] ; then
        rm "$UDS_FILE"
      fi
    fi
    (( testNum++ )) || true
  done
done

#$NHI1_abs_top_srcdir/performance/results.sh

if (( ${#myfailed[@]} )) ; then
  pOrange $'\nWARNINGS:'
  args_join_by $'\n' "${myfailed[@]}" | sed -e 's/^/  > /'
  echo
  exit 1
else
  exit 0
fi
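The "make unique" step used by the script (expanding the matched test names one per line and piping them through sort -u) can be exercised as a standalone sketch. The input array here is invented for the example:

```shell
#!/usr/bin/env bash
# Standalone sketch of the "make unique" step: the matched test names
# are expanded one per line and deduplicated with sort -u.
tgtFoundL=(c_pipe tcl_pipe c_pipe tcl_pipe)   # invented duplicates
mapfile -t tgtFoundL < <(tr ' ' '\n' <<< "${tgtFoundL[@]}" | sort -u)
echo "${tgtFoundL[@]}"   # -> c_pipe tcl_pipe
```

Note that the deduplication only runs when specific patterns were given; when ALL tests are selected, the array is copied verbatim.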