The oc rsh command in OpenShift allows you to “step inside” a running container and interact with it as if you were using a regular command prompt. It’s like opening a door to the container and being able to run commands and access files inside it.
To use oc rsh, you need the oc command-line tool (the OpenShift CLI), which connects you to an OpenShift cluster.
You find the container you want to access within a group of containers called a “pod.”
You run oc rsh followed by the pod’s name (for example, oc rsh <pod-name>) to enter the container. It’s like opening the door to the container.
Once inside, you can run commands and navigate the container’s files as if you were using a regular command prompt. You can check logs, run scripts, and do other things that the container allows.
When you’re done, you exit the container by typing exit or pressing Ctrl+D. It’s like closing the door behind you.
Remember, it’s important to be careful when using oc rsh because you can make changes that affect the container and the application running inside it.
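A quick sketch of a typical session (the pod and file names below are hypothetical):

```shell
# Find the pod that contains the container you want
oc get pods

# Open a remote shell inside the pod (pod name is hypothetical)
oc rsh my-app-7d4b9c-x2x1z

# Inside the container, inspect it as you would any shell
ls /opt/app
tail /var/log/app.log

# Close the door behind you
exit
```

If the pod has more than one container, oc rsh -c <container-name> <pod-name> selects a specific one.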
oc delete
The oc delete command in OpenShift is used to delete various resources within an OpenShift cluster. It allows you to remove objects like pods, services, deployments, routes, and more.
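A few hedged examples (all resource names here are hypothetical):

```shell
# Delete a single pod by name
oc delete pod my-app-7d4b9c-x2x1z

# Delete a service and a route
oc delete service my-service
oc delete route my-route

# Delete every pod matching a label selector
oc delete pods -l app=my-app

# Delete the resources defined in a file
oc delete -f deployment.yaml
```

Note that deletion is usually permanent, though pods managed by a controller (such as a Deployment) may be recreated automatically.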
A class file in Java has a .class extension. It contains bytecode: instructions for the Java Virtual Machine (JVM), which translates that bytecode into platform-specific machine instructions, whether the Java program runs on Windows or Linux. Compiling a source file with javac Main.java produces Main.class, and java Main runs it.
First Java Program.
public class Main {
    public static void main(String[] args) {
        System.out.println("Welcome to Java programming");
    }
}
The Scanner class is used to get input from the user in Java.
It can parse primitive types and strings, using regular expressions to split the input into tokens.
import java.util.Scanner;

public class InputDemo {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in); // Create an instance of the Scanner
        System.out.print("Enter a double value: ");
        double d = input.nextDouble();
        System.out.println("Your value is " + d);
    }
}
The Java if statement makes a decision based on whether its condition evaluates to true or false.
if (condition)
{
    // Statements to execute if the given condition is true
}
---------------------------------
if (condition)
{
    // Run this block if condition is true
}
else
{
    // Run this block if condition is false
}
-----------------------------------------------------
if (condition)
    statement;
else if (condition)
    statement;
.
.
else
    statement;
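As a concrete sketch of the forms above (the score and grade boundaries are made up for illustration):

```java
public class IfDemo {
    public static void main(String[] args) {
        int score = 72; // example value

        if (score >= 90) {
            System.out.println("Grade A");
        } else if (score >= 70) {
            System.out.println("Grade B");
        } else {
            System.out.println("Grade C");
        }
    }
}
```

Only the first branch whose condition is true runs; the rest are skipped.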
Switch
Executes a block of code based on the value of the switch expression.
switch (expression)
{
    case value1:
        statement1;
        break;
    case value2:
        statement2;
        break;
    default:
        statementDefault;
}
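A short sketch of the skeleton above (the day-number mapping is chosen just for illustration):

```java
public class SwitchDemo {
    public static void main(String[] args) {
        int day = 3; // example value

        switch (day) {
            case 1:
                System.out.println("Monday");
                break;
            case 2:
                System.out.println("Tuesday");
                break;
            case 3:
                System.out.println("Wednesday");
                break;
            default:
                System.out.println("Another day");
        }
    }
}
```

Each break exits the switch; without it, execution falls through into the next case.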
from pyspark.sql.functions import *

customer = spark.sql("SELECT * FROM workspace.customerdata.customer")
display(customer)
customer.printSchema()

# filter by customer_type
df1 = customer.filter(customer["customer_type"] == "VIP")
df0 = customer.filter(col("customer_type") == "Regular")
display(df1)

# filter by customer_type and country
df1 = customer.filter((customer.customer_type == "VIP") & (customer.country == "USA"))

# where condition (where is an alias for filter)
df2 = customer.where((customer.customer_type == "VIP") & (customer.country == "USA"))

# or operator
df3 = customer.where((customer.customer_type == "VIP") | (customer.country == "USA"))
Add a new column with the withColumn function
from pyspark.sql.functions import *

customer = spark.sql("SELECT * FROM workspace.customerdata.customer")
customer = customer.withColumn("Salary", col("age") * 1000)
customer.printSchema()
display(customer)