
How to edit file /etc/fstab


This is what my fstab file looks like; it lives at /etc/fstab in the Linux directory tree.


Editing this file lets you fix a number of mount-related problems.




# /etc/fstab: static file system information.
#
# Use 'vol_id --uuid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
proc /proc proc defaults 0 0
# / was on /dev/sda8 during installation
UUID=64839f40-b2f1-412f-ae1a-c5a213ba449a / ext3 relatime,errors=remount-ro 0 1
# /home was on /dev/sda7 during installation
UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2
# /boot was on /dev/sda6 during installation
UUID=faaab6f9-539b-4f1a-82fc-b5a18887d28d /boot ext3 relatime,errors=remount-ro 0 3
# swap was on /dev/sda5 during installation
UUID=ab5f806d-8f4e-42b9-b67b-5618e9715585 none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0
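Before changing anything in this file, it is worth keeping a backup copy so a bad edit can be rolled back (from a live CD if the system no longer boots). A minimal sketch; the backup file name is my own choice, and note that /etc/fstab is world-readable, so the copy itself does not need root:

```shell
# Keep a dated backup of fstab before editing it.
backup="$HOME/fstab.backup.$(date +%Y%m%d)"
cp /etc/fstab "$backup"

# Verify the copy is identical to the original:
diff /etc/fstab "$backup" && echo "backup OK"
```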


Now let us study one of these lines and what it contains.


# /home was on /dev/sda7 during installation

UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2

This line contains the following parts:



  1. # /home was on /dev/sda7 during installation
  2. UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf
  3. /home
  4. ext3
  5. relatime,errors=remount-ro 0 2
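Put together, a full entry has six whitespace-separated fields: device, mount point, file system type, options, dump, and pass (part 5 above actually bundles the last three). A quick way to see the split, using the /home line from this very file:

```shell
# Split one fstab entry into its six fields with awk.
line='UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2'
echo "$line" | awk '{
  print "device:   " $1
  print "mount:    " $2
  print "fs type:  " $3
  print "options:  " $4
  print "dump:     " $5
  print "pass:     " $6
}'
```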

Now let me consider each of these in turn.

I actually have separate home and boot partitions.

When I installed Linux for the second time, I had to edit this file so those existing partitions would again be used as the actual /home and /boot partitions.



Part 1
This comment records where the partition was at installation time. In my case the partition was on sda7, as the text "/dev/sda7" indicates.


Part 2
This is the partition's UUID, a unique identifier that keeps working even if disks are added or removed.

To find the UUID of any partition, launch the partition editor, right-click the partition, and select Information; the UUID is listed there.
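If you prefer the command line, the blkid tool (part of util-linux, normally run as root) prints the same UUID. The device name and UUID below are just this article's examples; the sed command shows how to pull the bare UUID out of blkid-style output:

```shell
# Normally you would run:  sudo blkid /dev/sda7
# and get back a line shaped like the sample below.
sample='/dev/sda7: UUID="2ab422e9-b37d-4e7e-966f-6ca7d1d081cf" TYPE="ext3"'

# Extract just the UUID value, ready to paste into fstab:
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```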

Part 3
This part says where to mount the partition. In this example, the sda7 partition is mounted as /home.


Part 4
This gives the file system type of the partition. In this example, the home partition uses the ext3 file system.


Part 5
This part actually holds three separate fields:
options       dump  pass

The options field can take values such as defaults or errors=remount-ro.
errors=remount-ro (remount the file system read-only when errors are found) is normally used for the root partition, but you can use it on any partition if you want.
The next two numbers are dump and pass. dump tells the dump(8) backup utility whether to back the file system up (almost always 0), and pass sets the order in which fsck checks file systems at boot (0 means skip the check).
Typical combinations are:

0 0 for /proc and swap (never checked)
0 1 for root (checked first)
0 2 for other partitions (checked after root)

You can see that the last two lines are a bit different, because they are for the swap partition and the CD-ROM. Don't edit them unless you really need to and you know what you are doing.


You may well need to restart (in my experience, sometimes twice) after editing this file before the changes take effect, so don't be afraid if it doesn't work the first time.
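Before rebooting at all, you can catch the most common mistake, a line with the wrong number of fields, with a small check. This is my own sketch, not a standard tool; on newer systems `sudo findmnt --verify` does a much more thorough validation, and `sudo mount -a` will attempt to mount every entry so errors show up immediately:

```shell
# Flag any non-comment, non-blank fstab line that does not have
# exactly six whitespace-separated fields.
check_fstab() {
  awk 'NF > 0 && $1 !~ /^#/ && NF != 6 { print "bad line " NR ": " $0; bad = 1 }
       END { exit bad }' "$1"
}

# Example: check_fstab /etc/fstab && echo "field counts look OK"
```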




