243

I want to test if a directory doesn't contain any files. If so, I will skip some processing.

I tried the following:

if [ ./* == "./*" ]; then
    echo "No new file"
    exit 1
fi

That gives the following error:

line 1: [: too many arguments

Is there a solution/alternative?

Anthony Kong
  • 5,318

27 Answers

317
if [ -z "$( ls -A '/path/to/dir' )" ]; then
   echo "Empty"
else
   echo "Not Empty"
fi

ls -A lists all entries except . and ..

Also, it would be best to check beforehand that the directory exists.
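Putting both checks together, a minimal sketch (dir_state is a made-up helper name, not part of the answer above):

```shell
#!/bin/sh
# Classify a path before deciding whether to skip processing.
# dir_state is a hypothetical name for illustration only.
dir_state() {
    if [ ! -d "$1" ]; then
        echo "Missing"
    elif [ -z "$(ls -A "$1")" ]; then
        echo "Empty"
    else
        echo "Not Empty"
    fi
}
```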

48

No need to count anything or use shell globs. You can also use read in combination with find: if find's output is empty, read fails and the test is false:

if find /some/dir -mindepth 1 -maxdepth 1 | read; then
   echo "dir not empty"
else
   echo "dir empty"
fi
slhck
  • 235,242
40
if [ -n "$(find "$DIR_TO_CHECK" -maxdepth 0 -type d -empty 2>/dev/null)" ]; then
    echo "Empty directory"
else
    echo "Not empty or NOT a directory"
fi
uzsolt
  • 1,305
30

With FIND(1) (under Linux and FreeBSD) you can look non-recursively at a directory entry via "-maxdepth 0" and test if it is empty with "-empty". Applied to the question this gives:

if test -n "$(find ./ -maxdepth 0 -empty)" ; then
    echo "No new file"
    exit 1
fi
TimJ
  • 409
25
#!/bin/bash
if [ -d /path/to/dir ]; then
    # the directory exists
    [ "$(ls -A /path/to/dir)" ] && echo "Not Empty" || echo "Empty"
else
    # You could check here if /path/to/dir is a file with [ -f /path/to/dir]
fi
Renaud
  • 484
13

What about testing whether the directory exists and is not empty in one if statement?

if [[ -d path/to/dir && -n "$(ls -A path/to/dir)" ]]; then 
  echo "directory exists and is not empty"
else
  echo "directory doesn't exist or is empty"
fi
aleb
  • 103
  • 3
stanieviv
  • 139
10

Use the following:

count="$( find /path -mindepth 1 -maxdepth 1 | wc -l )"
if [ $count -eq 0 ] ; then
   echo "No new file"
   exit 1
fi

This way, you're independent of the output format of ls. -mindepth skips the directory itself, and -maxdepth prevents find from recursively descending into subdirectories, to speed things up.
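The same counting idea wrapped in a function (entry_count is a made-up name). One caveat: a filename containing a newline would inflate the count:

```shell
#!/bin/sh
# Count directory entries without parsing ls output.
# Caveat: a filename containing an embedded newline counts more than once.
entry_count() {
    find "$1" -mindepth 1 -maxdepth 1 | wc -l
}
```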

Daniel Beck
  • 111,893
9

A hacky, but bash-only, PID-free way:

is_empty() {
    test -e "$1/"* 2>/dev/null
    case $? in
        1)   return 0 ;;
        *)   return 1 ;;
    esac
}

This takes advantage of the fact that the test builtin exits with 2 if given more than one argument after -e: first, the "$1"/* glob is expanded by bash, resulting in one argument per file. So:

  • If there are no files, the asterisk in test -e "$1"/* does not expand, so the shell falls back to a literal file named *, and test returns 1.

  • ...except if there actually is one file named exactly *: then the asterisk expands to, well, an asterisk, which ends up as the same call as above, i.e. test -e "dir/*", but this time returns 0. (Thanks @TrueY for pointing this out.)

  • If there is exactly one file, test -e "dir/file" is run, which returns 0.

  • But if there is more than one file, test -e "dir/file1" "dir/file2" is run, which test reports as a usage error, i.e. 2.

The case statement wraps the logic so that only the first case, exit status 1, is reported as success.

Possible problems I haven't checked:

  • There are more files than the maximum number of allowed arguments -- I'd guess this behaves like the case with 2+ files.

  • Or there is actually a file with an empty name -- I'm not sure that's possible on any sane OS/filesystem.
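A quick way to see the three exit statuses in action (probe is a made-up name; assumes bash defaults, i.e. nullglob off):

```shell
#!/bin/bash
# Echo the exit status of the "test -e glob" trick for a given directory.
# probe is a hypothetical name for illustration only.
probe() {
    test -e "$1"/* 2>/dev/null   # glob expands to 0, 1, or many args
    echo $?
}
```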

Alois Mahdal
  • 2,344
8

This will do the job in the current working directory (.):

[ `ls -1A . | wc -l` -eq 0 ] && echo "Current dir is empty." || echo "Current dir has files (or hidden files) in it."

or the same command split on three lines just to be more readable:

[ `ls -1A . | wc -l` -eq 0 ] && \
echo "Current dir is empty." || \
echo "Current dir has files (or hidden files) in it."

Just replace ls -1A . | wc -l with ls -1A <target-directory> | wc -l if you need to run it on a different target folder.

Edit: I replaced -1a with -1A (see @Daniel comment)

ztank1013
  • 561
6

Using an array:

shopt -s nullglob dotglob
files=( * )
if (( ${#files[@]} == 0 )); then
    # with dotglob, * matches hidden files too, but never . or ..
    echo dir is empty
fi
glenn jackman
  • 27,524
3

There are lots of good answers here for simple cases, but this QA ranks very highly in a web search, and many of the answers have subtle failures that may be important for some use cases:

  • permissions problems (i.e. reporting empty dir when it's only unsearchable)
  • ambiguity about symlinks (test dereferences, find doesn't by default, ls may depend on whether the path ends with /)
  • answers using shell globbing may rely on options like nullglob that need to be tested or explicitly set

Here is a bash shell function that is efficient for a directory with lots of files, doesn't rely on shell options for globbing, and explicitly tests for permission problems:

mtdir() {
# Robust test for empty directory
# Usage: mtdir <dir_name>
# Return codes:
#   0 : empty
#   1 : non-empty
#   2 : evaluation not possible

if [[ -d "$1" && -r "$1" && -x "$1" ]]
then
    if find -L -- "$1" -maxdepth 0 -type d -empty | grep -q .
    then
        # empty directory
        return 0
    else
        # non-empty directory
        return 1
    fi
else
    # consider printing an error
    echo "argument $1 is not a directory, not readable, or not searchable by this user"
    return 2
fi

}

The function can be used with:

if mtdir some/dir
then
    echo "directory is empty"
    # further processing
fi

You can also test for a non-empty directory explicitly, or an error, using the return code:

mtdir some/dir
ec=$?

if [[ $ec -eq 1 ]]; then
    echo "non-empty directory"
    # further processing

elif [[ $ec -eq 2 ]]; then
    echo "unexpected error: not a directory, not readable, or not searchable"
    # further processing
fi

This function has been tested and works under Linux and macOS, with edge cases of non-directory files, symlinks, and directories with locked down permissions (i.e. chmod 0000 testdir).

As written, valid symlinks are dereferenced (followed), testing the targets rather than the links themselves (both at the -d test, and using find -L). If symbolic links should be tested as link files rather than their target for your application, an explicit test should be added before -d (e.g. ! -L "$1" && ...), and find -P should be used instead of find -L.
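A sketch of that variation (mtdir_nofollow is a made-up name): symlinks are rejected up front, and find runs with -P so links are never followed:

```shell
#!/bin/bash
# Like mtdir above, but treats a symlink as a link file rather than
# testing its target. mtdir_nofollow is a hypothetical name.
mtdir_nofollow() {
    if [[ ! -L "$1" && -d "$1" && -r "$1" && -x "$1" ]]; then
        if find -P -- "$1" -maxdepth 0 -type d -empty | grep -q .; then
            return 0    # empty
        else
            return 1    # non-empty
        fi
    else
        return 2    # symlink, not a directory, or not accessible
    fi
}
```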

2

This solution is using only shell built-ins:

function is_empty() {
  typeset dir="${1:?Directory required as argument}"
  set -- "${dir}"/*
  [ "${1}" = "${dir}/*" ]
}

is_empty /tmp/empty && echo "empty" || echo "not empty"
pez
  • 31
2

I think the best solution is:

files=$(shopt -s nullglob; shopt -s dotglob; echo /MYPATH/*)
[[ "$files" ]] || echo "dir empty" 

thanks to https://stackoverflow.com/a/91558/520567

This is an anonymous edit of my answer that might or might not be helpful to somebody: A slight alteration gives the number of files:

files=$(shopt -s nullglob dotglob; s=(MYPATH/*); echo ${#s[*]})
echo "MYPATH contains $files files"

This will work correctly even if filenames contain spaces.
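For experimenting, the counting variant as a function (count_entries is a made-up name; the subshell keeps the shopt changes local to the call):

```shell
#!/bin/bash
# Count all entries, including hidden files, using globbing only.
# count_entries is a hypothetical name for illustration.
count_entries() {
    ( shopt -s nullglob dotglob   # empty glob -> empty array; match dotfiles
      s=( "$1"/* )
      echo "${#s[@]}" )
}
```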

akostadinov
  • 1,530
1
if find "${DIR}" -prune ! -empty -exit 1; then
    echo Empty
else
    echo Not Empty
fi

EDIT: I think that this solution works fine with gnu find, after a quick look at the implementation. But this may not work with, for example, netbsd's find. Indeed, that one uses stat(2)'s st_size field. The manual describes it as:

st_size            The size of the file in bytes.  The meaning of the size
                   reported for a directory is file system dependent.
                   Some file systems (e.g. FFS) return the total size used
                   for the directory metadata, possibly including free
                   slots; others (notably ZFS) return the number of
                   entries in the directory.  Some may also return other
                   things or always report zero.

A better solution, also simpler, is:

if find "${DIR}" -mindepth 1 -exit 1; then
    echo Empty
else
    echo Not Empty
fi

Also, the -prune in the 1st solution is useless.

EDIT: GNU find has no -exit; the solution above is good for NetBSD's find. For GNU find, this should work:

if [ -z "`find \"${DIR}\" -mindepth 1 -exec echo notempty \; -quit`" ]; then
    echo Empty
else
    echo Not Empty
fi
yarl
  • 208
1

The Question was:

if [ ./* == "./*" ]; then
    echo "No new file"
    exit 1
fi

Answer is:

if ls -1qA . | grep -q .
then
    : # Dir is not empty
else
    echo "No new file"
    exit 1
fi
HarriL
  • 19
  • 2
1
[ $(ls -A "$path" 2> /dev/null | wc -l) -eq 0 ] && echo "Is empty or does not exist." || echo "Is not empty."
1

Although there are many reasonable solutions here, I am personally not a big fan of most of these answers, as many print a lot of output when returning directory contents. I wanted a solution that handles directories with large numbers of files well, and that is also easy to understand.

So, this is what I ended up with, and thought I would share:

This appears to work OK for me on RedHat:

dir="/tmp/my_empty_dir"
[[ -d "${dir}" && -z "$(find "${dir}" -not -path "${dir}" -print -quit)" ]] && echo "${dir} is empty"

In this example:

First ensure the dir exists with -d "${dir}"; without this check, a nonexistent directory would also produce empty find output (plus an error sent to stderr) and be reported as empty.

However, you would likely still want to test for existence separately, since with this one-liner a "not empty" outcome (no message printed) could mean either "contains files" or "does not exist".

AND

Find: (find $dir -not -path $dir -print -quit):

  • Find everything in $dir
  • Exclude the directory $dir from the resulting output
  • Print the first result (something else within $dir)
  • Quit immediately (only return the first result).

BEWARE: the -path parameter takes a "pattern", so if the directory name contains special characters (e.g. *, [, ]), these need to be escaped, e.g.:

dir='/tmp/test[dir]'
dirpath='/tmp/test\[dir\]'
find "${dir}" -not -path "${dirpath}" -print -quit

During my test, this also successfully found hidden files. ($dir/.hidden)

find returns 0 regardless of whether anything is found, and I don't currently see a simpler way to test this, so as per other examples I also wrapped this in:

Empty: [[ -z "$result" ]] to test if the result is blank.

NOT Empty: [[ ! -z "$result" ]] to test if the result is not blank.

Yes, the braces around ${dir} are not strictly required here, but I thought it best to keep them to handle this use case:

dir="/tmp/"
[[ -d "${dir}subdir" ...
1

Without calling utils like ls, find, etc.:

Inside dir:

[ "$(echo *)x" != '*x' ] || [ "$(echo .[^.]*)x" != ".[^.]*x" ] || echo "empty dir"

The idea:

  • echo * lists non-dot files
  • echo .[^.]* lists dot files except "." and ".."
  • if echo finds no matches, it prints the search expression itself, i.e. * or .[^.]*; neither is a real string, so each is concatenated with a letter (here x) to force a string comparison
  • || short-circuits the alternatives: there is at least one non-dot file or dir, OR at least one dot file or dir, OR the directory is empty. Only if the first two tests fail does Bash execute the last command, echo "empty dir"; put your action for empty dirs there (e.g. exit).

Checked with symlinks, yet to check with more exotic possible file types.
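The same trick as a function for experimenting (glob_empty is a made-up name; the subshell body keeps the cd local). Note that .[^.]* still misses names like ..foo, so this is a sketch rather than a bulletproof test:

```shell
#!/bin/sh
# Succeeds only when neither glob matches anything, i.e. the dir is empty.
# glob_empty is a hypothetical name for illustration.
glob_empty() (
    cd "$1" || return 2
    [ "$(echo *)x" = '*x' ] && [ "$(echo .[^.]*)x" = ".[^.]*x" ]
)
```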

1

Another find solution, which doesn't rely on other tools (note that -printf is a GNU find extension) and should be pretty fast:

if (( "$(find /directory/to/check/ -mindepth 1 -printf '1\n' -quit)" )); then
  echo The directory is not empty
else
  echo The directory is empty
fi
smac89
  • 474
0

For any directory other than the current one, you can check if it's empty by trying to rmdir it, because rmdir is guaranteed to fail for non-empty directories. If rmdir succeeds, and you actually wanted the empty directory to survive the test, just mkdir it again.

Don't use this hack if there are other processes that might become discombobulated by a directory they know about briefly ceasing to exist.

If rmdir won't work for you, and you might be testing directories that could potentially contain large numbers of files, any solution relying on shell globbing could get slow and/or run into command line length limits. Probably better to use find in that case. The best find solution I can think of goes like

is_empty() {
    test -z "$(find "$@" -maxdepth 0 -not -empty 2>&1)"
}

This works for the GNU and BSD versions of find but not for the Solaris one, which lacks both -maxdepth and -empty options (-empty does what it says; -maxdepth 0 forces find to examine only files and directories supplied explicitly as arguments rather than descending recursively into any directories so supplied).

The slightly convoluted combination of test -z with -not and error message redirection makes is_empty useful for testing multiple directories at once: it will return true only if find emits no output and no errors, which will happen only when all the files it's asked to process exist and are empty regular files or directories (given no arguments at all, it tests the current directory).
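A quick usage check of that function (assuming GNU or BSD find, as stated above):

```shell
#!/bin/sh
# True only if every argument names an existing, empty file or directory;
# any error output from find also makes the substitution non-empty.
is_empty() {
    test -z "$(find "$@" -maxdepth 0 -not -empty 2>&1)"
}
```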

If you want a test that returns true only when all its arguments identify empty directories and false when any either don't exist or are non-directories, you could use

is_empty_dir() {
    test -z "$(find "$@" -maxdepth 0 -not \( -empty -type d \) 2>&1)"
}
0

This work for me, to check & process files in directory ../IN, considering script is in ../Script directory:

FileTotalCount=0

for file in ../IN/*; do
    FileTotalCount=`expr $FileTotalCount + 1`
done

if test "$file" = "../IN/*"
then

    echo "EXITING: NO files available for processing in ../IN directory."
    exit

else

    echo "Starting Process: Found $FileTotalCount files in ../IN directory for processing."

    # Rest of the Code

fi
kenorb
  • 26,615
Arijit
  • 1
0

I made this approach:

CHECKEMPTYFOLDER=$(test -z "$(ls -A /path/to/dir)"; echo $?)
if [ $CHECKEMPTYFOLDER -eq 0 ]
then
  echo "Empty"
elif [ $CHECKEMPTYFOLDER -eq 1 ]
then
  echo "Not Empty"
else
  echo "Error"
fi
0

I might have missed an equivalent to this, which works on Unix

cd directory-concerned
ls * > /dev/null 2> /dev/null

The return code (test the value of $?) will be 2 if nothing is found, or 0 if something is found.

Note this ignores any '.' files, and will probably return 2 if only such files exist and there are no other 'normal' filenames.
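Wrapped in a function so the cd doesn't affect the caller (has_visible_files is a made-up name; since the exact failure code differs between ls implementations, GNU ls exits 2 and BSD ls exits 1 here, only success/failure is relied on):

```shell
#!/bin/sh
# Succeeds if the directory contains at least one non-dot entry.
# has_visible_files is a hypothetical name; the () body runs in a subshell.
has_visible_files() (
    cd "$1" || return 2
    ls * > /dev/null 2> /dev/null
)
```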

Anthony Kong
  • 5,318
Tim
  • 1
0

More solutions with find

# Tests that a directory is empty.
# Will print error message if not empty to stderr and set return
# val to non-zero (i.e. evaluates as false)
#
function is_empty() {
    find "$1" -mindepth 1 -exec false {} + -fprintf /dev/stderr "%H is not empty\n" -quit
    # prints an error to stderr when the dir is not empty
    # -fprintf /dev/stderr "%H is not empty\n"
    #
    # -exec false {} +
    # sets the return value (i.e. $?) to indicate error
    #
    # -quit
    # terminate after the first match

}

examples

#!/bin/bash
set -eE # stop execution upon error

function is_empty() { find "$1" -mindepth 1 -exec false {} + -fprintf /dev/stderr "%H is not empty\n" -quit; }

trap 'echo FAILED' ERR #trap "echo DONE" EXIT

create a sandbox to play in

d=$(mktemp -d)
f=$d/blah # this will be a potential file

set -v # turn on debugging

dir should be empty

is_empty $d

create a file in the dir

touch $f
! is_empty $d

this will cause the script to fail because the dir is not empty

is_empty $d

this line will not execute

echo "we should not get here"

output

[root@sysresccd ~/sandbox]# ./test

dir should be empty

is_empty $d

create a file in the dir

touch $f
! is_empty $d
/tmp/tmp.aORTHb3Trv is not empty

this will cause the script to fail because the dir is not empty

is_empty $d
/tmp/tmp.aORTHb3Trv is not empty
echo FAILED
FAILED

0

EDIT: Disregard the below, I was indeed mistaken.

The -s check will report true if a directory exists and has a size greater than zero, which is true for any directory. It does not reliably indicate whether the dir is empty.

Thanks to the commenters for pointing this out. I've left the answer and the comments as it may be useful for others to see that -s doesn't work for this.


I could be mistaken, but I think this check should be sufficient:

[[ -s /path/to/dir ]] && echo "Dir not empty" || echo "Dir empty"
Lou
  • 654
0

The accepted answer is good, although in bash I prefer a shorter syntax:

if [[ `ls -A "$dir"` ]]; then
    echo Non empty
else
    echo Empty
fi

Also since I almost always set shopt -s nullglob in scripts, it seems more natural to me to use pure bash constructions, for example like this:

for nonEmpty in "$dir"/{,.}*; do break; done

if [[ $nonEmpty ]]; then
    echo Non empty
else
    echo Empty
fi

or this, if suitable:

for nonEmpty in "$dir"/{,.}*; do
    echo Non empty
    break
done

It might be better since it does not use a subshell. Here "$dir"/{,.}* expands to "$dir"/* "$dir"/.* according to Brace Expansion; see man bash.

Alek
  • 125
0

This is all great stuff - just made it into a script so I can check for empty directories below the current one. The below should be put into a file called 'findempty', placed in the path somewhere so bash can find it and then chmod 755 to run. Can easily be amended to your specific needs I guess.

#!/bin/bash
if [ "$#" == "0" ]; then
    find . -maxdepth 1 -type d -exec findempty "{}" \;
    exit
fi

COUNT=`ls -1A "$*" | wc -l`
if [ "$COUNT" == "0" ]; then
    echo "$* : $COUNT"
fi