
How to fix the GRUB loader when it is not loading at boot

When you install or restore Microsoft Windows after a Linux installation, you will find that the GRUB loader no longer loads at boot.


This is not an error in GRUB or Linux, but in this situation you cannot boot into Linux.

There are a few ways to fix this issue. One way is to use Super Grub Disk.

Here I will describe another way to fix it.

You just have to boot the machine using any Linux live CD/DVD.

Now open a terminal and type

sudo grub

and enter your password. This starts the GRUB shell as root.

Now you have to find out a few things.

First, find the hd number of the hard disk.

GRUB legacy numbers disks from zero, so this is normally 0 when there is only one hard disk.

But if there is more than one disk, you have to work out which hd number belongs to the disk that GRUB should be installed on.

Then find the number of the partition that contains the GRUB files.

If you did not create a separate /boot partition during the Linux installation, this is the same as the Linux root partition.

If you are using a Debian-based live CD, you can start the application called Partition Editor to find these values.
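
If you prefer the terminal, the same information can be read from fdisk (this command is not mentioned in the original post, just a common alternative):

sudo fdisk -l

It lists every disk (/dev/sda, /dev/sdb, ...) and its partitions, which you can map to GRUB's hd and partition numbers.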

Here I will take the hd number as x and the partition number as y.
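
For example (this is not from the original post, just an illustration), if Linux was installed on /dev/sda1 of the only hard disk, the hd number x is 0, and because GRUB legacy also counts partitions from zero, y is 0 as well, so the device is (hd0,0).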

After typing sudo grub and entering the password, you will be switched to a new prompt (the GRUB prompt) like the one below:

grub>
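
If you are not sure of x and y, GRUB legacy can usually find them for you (this step is not in the original post, but it is a common trick): at the prompt run

grub> find /boot/grub/stage1

(or find /grub/stage1 if /boot is a separate partition). It prints the (hdX,Y) device that contains the GRUB files, which you can use in the next command.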

Now enter

grub> root (hdx,y)
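
The root command above only tells GRUB which partition holds its files. In classic GRUB legacy, reinstalling the loader to the master boot record needs one more step, which is not spelled out above, so the usual sequence continues with:

grub> setup (hdx)

grub> quit

setup (hdx) writes GRUB back to the MBR of disk x, and quit leaves the GRUB shell.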

Now restart the machine and you will find the GRUB loader loading again at boot.
Enjoy!!!


