You need a few configuration changes, shown below:
Set dfs.support.append to true in hdfs-site.xml:
<property>
   <name>dfs.support.append</name>
   <value>true</value>
</property>
Stop all your daemon services with stop-all.sh and restart them with start-all.sh.
If you have a single-node cluster, you have to set the replication factor to 1.
You can use the following command:
./hdfs dfs -setrep -R 1 filepath/directory
Or you can do the same at run time through Java code (note that FsShell has no setrepr method; the programmatic API is FileSystem#setReplication):
fileSystem.setReplication(hdfsPath, (short) 1);
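As a fuller sketch, here is a minimal, self-contained program that sets the replication factor through the FileSystem API. The NameNode URI and file path are placeholders; substitute your own values:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplication {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Placeholder URI: point this at your NameNode.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        // FileSystem implements Closeable, so try-with-resources cleans up.
        try (FileSystem fs = FileSystem.get(conf)) {
            // Set the replication factor of an existing file to 1;
            // returns true if the change was accepted by the NameNode.
            boolean ok = fs.setReplication(new Path("/test/doc.txt"), (short) 1);
            System.out.println("replication updated: " + ok);
        }
    }
}
```

setReplication only applies to files that already exist; for new files, pass the desired factor to FileSystem#create instead.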
Use the following code to append data to your file:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public void createAppendHDFS(String hdfsUri) throws IOException {
    Configuration hadoopConfig = new Configuration();
    hadoopConfig.set("fs.defaultFS", hdfsUri);
    FileSystem fileSystem = FileSystem.get(hadoopConfig);
    Path hdfsPath = new Path("/test/doc.txt");
    FSDataOutputStream fileOutputStream = null;
    try {
        if (fileSystem.exists(hdfsPath)) {
            // On a single-node cluster the file must have replication factor 1
            // before it can be appended to.
            fileSystem.setReplication(hdfsPath, (short) 1);
            fileOutputStream = fileSystem.append(hdfsPath);
            fileOutputStream.writeBytes("appending into file.\n");
        } else {
            fileOutputStream = fileSystem.create(hdfsPath);
            fileOutputStream.writeBytes("creating and writing into file\n");
        }
    } finally {
        // Close the stream before the file system so buffered data is flushed.
        if (fileOutputStream != null) {
            fileOutputStream.close();
        }
        if (fileSystem != null) {
            fileSystem.close();
        }
    }
}
I hope this helps.