July 15, 2025

Oracle 19c Installation guide (Linux/ Windows)

How to Configure the Oracle Listener: Step-by-Step Guide

 Oracle Listener

  • A server-side process that listens for incoming client connection requests to Oracle databases.
  • Uses the listener.ora configuration file, typically located in:
    • $ORACLE_HOME/network/admin (Linux)
    • %ORACLE_HOME%\network\admin (Windows)

Step-by-step Listener Configuration

Create/Modify listener.ora
Configure tnsnames.ora
Start the listener (lsnrctl start)
Test connectivity using tnsping
Register services if needed (ALTER SYSTEM REGISTER)

Step 1: Check if a listener is already running

lsnrctl status

  • If it says TNS-12541: TNS:no listener, no listener is running.
  • Otherwise, it will display current listener status.

Step 2: Create/Modify listener.ora

cd $ORACLE_HOME/network/admin

Open listener.ora in your editor:

vi listener.ora

Add or modify:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = your-hostname-or-ip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

Explanation:

  • PROTOCOL = TCP: Network protocol.
  • HOST: Your server hostname or IP (localhost if local).
  • PORT = 1521: Default Oracle listener port.
  • PROTOCOL = IPC: For local (inter-process) connections, including external procedures.
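
Optionally, you can also add static service registration in the same listener.ora, so the listener knows about the service even before the instance registers dynamically. This is a sketch only; the ORACLE_HOME path and SID below are placeholders that must match your environment:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = orcl)
    )
  )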


Step 3: Create/Modify tnsnames.ora (Client/Server)

In the same folder, edit tnsnames.ora:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = your-hostname-or-ip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

  • ORCL is your connection alias.
  • SERVICE_NAME should match your database service name (select value from v$parameter where name='service_names';).


Step 4: Start the Listener

lsnrctl start

Verify:

lsnrctl status

You should see your listener, hostname, and port.


Step 5: Test Listener Connection

tnsping ORCL

Expected output:

OK (20 msec)

This confirms your listener is reachable.
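
To go one step further than tnsping, you can try an actual database connection through the same alias (the account shown here is only an example; use any valid database user):

sqlplus system@ORCL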


Step 6: Add Database to Listener (if not automatically registered)

In SQL*Plus as SYSDBA:

ALTER SYSTEM REGISTER;

This will dynamically register your database with the listener (Dynamic Service Registration).
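
If the listener uses a non-default host or port, the database may first need to be told where to register. As a hedged example, assuming an spfile is in use (adjust the address to your environment):

ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=your-hostname-or-ip)(PORT=1521))' SCOPE=BOTH;
ALTER SYSTEM REGISTER;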

Verify using:

lsnrctl services

You should see your database service listed under the listener.


Step 7: Stopping the Listener

To stop:

lsnrctl stop

To reload the configuration (re-read listener.ora) without a full stop and start:

lsnrctl reload



 Troubleshooting Tips

  • Ensure port 1521 is open in your firewall.
  • Check listener.log (in $ORACLE_HOME/network/log, or under the ADR directory, typically $ORACLE_BASE/diag/tnslsnr/<hostname>/<listener-name>/trace, when ADR logging is enabled).
  • If you see TNS-12541: TNS:no listener, ensure the listener is running.
  • Ensure the SERVICE_NAME in tnsnames.ora matches the database service name.
  • Use ALTER SYSTEM REGISTER if the service does not appear after starting.


 


June 05, 2025

Software Development Life Cycle

 

Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.

  • SDLC is the acronym of Software Development Life Cycle.
  • It is also called the Software Development Process.
  • SDLC is a framework defining tasks performed at each step in the software development process.
  • ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.

What is SDLC?

SDLC is a process followed for a software project, within a software organization. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of software and the overall development process.

The following figure is a graphical representation of the various stages of a typical SDLC.

A typical Software Development Life Cycle consists of the following stages −

Stage 1: Planning and Requirement Analysis

Requirement analysis is the most important and fundamental stage in SDLC. It is performed by the senior members of the team with inputs from the customer, the sales department, market surveys and domain experts in the industry. This information is then used to plan the basic project approach and to conduct a product feasibility study in the economic, operational and technical areas.

Planning for the quality assurance requirements and identification of the risks associated with the project is also done in the planning stage. The outcome of the technical feasibility study is to define the various technical approaches that can be followed to implement the project successfully with minimum risks.

Stage 2: Defining Requirements

Once the requirement analysis is done the next step is to clearly define and document the product requirements and get them approved from the customer or the market analysts. This is done through an SRS (Software Requirement Specification) document which consists of all the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture

SRS is the reference for product architects to come out with the best architecture for the product to be developed. Based on the requirements specified in SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS - Design Document Specification.

This DDS is reviewed by all the important stakeholders, and based on various parameters such as risk assessment, product robustness, design modularity, and budget and time constraints, the best design approach is selected for the product.

A design approach clearly defines all the architectural modules of the product along with their communication and data flow representation with external and third-party modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined, down to the smallest detail, in the DDS.

Stage 4: Building or Developing the Product

In this stage of SDLC the actual development starts and the product is built. The programming code is generated as per DDS during this stage. If the design is performed in a detailed and organized manner, code generation can be accomplished without much hassle.

Developers must follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming language is chosen according to the type of software being developed.

Stage 5: Testing the Product

This stage is usually a subset of all the stages, as in modern SDLC models the testing activities are mostly involved in all the stages of SDLC. However, this stage refers to the testing-only phase of the product, where product defects are reported, tracked, fixed and retested until the product reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance

Once the product is tested and ready to be deployed it is released formally in the appropriate market. Sometimes product deployment happens in stages as per the business strategy of that organization. The product may first be released in a limited segment and tested in the real business environment (UAT- User acceptance testing).

Then, based on the feedback, the product may be released as it is or with suggested enhancements in the targeted market segment. After the product is released in the market, its maintenance is done for the existing customer base.

SDLC Models

There are various software development life cycle models defined and designed which are followed during the software development process. These models are also referred to as Software Development Process Models. Each process model follows a series of steps unique to its type to ensure success in the process of software development.

Following are the most important and popular SDLC models followed in the industry −

  • Waterfall Model
  • Iterative Model
  • Spiral Model
  • V-Model
  • Big Bang Model

Other related methodologies are the Agile Model, the RAD (Rapid Application Development) Model, and Prototyping Models.

What is Software Testing?

Software testing is defined as an activity to check whether the actual results match the expected results and to ensure that the software system is defect-free. It involves the execution of a software or system component to evaluate one or more properties of interest.

Software testing also helps to identify errors, gaps or missing requirements relative to the actual requirements. It can be done either manually or using automated tools. Software testing is often described in terms of White Box and Black Box Testing.

In simple terms, Software Testing means Verification of Application Under Test (AUT).

Why is Software Testing Important?

Testing is important because software bugs could be expensive or even dangerous. Software bugs can potentially cause monetary and human loss, and history is full of such examples.

  • In April 2015, the Bloomberg terminal in London crashed due to a software glitch, affecting more than 300,000 traders on financial markets and forcing the government to postpone a £3bn debt sale.
  • Nissan had to recall over 1 million cars from the market due to a software failure in the airbag sensor detectors. Two accidents were reported as a result of this failure.
  • Starbucks was forced to close about 60 percent of its stores in the U.S. and Canada due to a software failure in its POS system. At one point, stores served coffee for free because they were unable to process transactions.
  • Some of Amazon's third-party retailers saw their product prices reduced to 1p due to a software glitch, leaving them with heavy losses.
  • A vulnerability in Windows 10 enabled users to escape security sandboxes through a flaw in the win32k system.
  • In 2015, the F-35 fighter plane fell victim to a software bug that made it unable to detect targets correctly.
  • A China Airlines Airbus A300 crashed due to a software bug on April 26, 1994, killing 264 people.
  • In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people dead and critically injuring 3 others.
  • In April 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest such accident in history.
  • In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.

Types of Software Testing

Typically Testing is classified into three categories.

  • Functional Testing
  • Non-Functional Testing or Performance Testing
  • Maintenance (Regression Testing and Maintenance Testing)

 

 

 

User acceptance testing (UAT)

User acceptance testing (UAT) is the last phase of the software testing process. During UAT, actual software users test the software to make sure it can handle required tasks in real-world scenarios, according to specifications.

  • UAT is one of the final and most critical software project procedures that must occur before newly developed software is rolled out to the market.
  • UAT is also known as beta testing, application testing or end-user testing.

UAT is important because it helps demonstrate that required business functions are operating in a manner suited to real-world circumstances and usage.

Acceptance tests are useful, because:

  • they capture user requirements in a directly verifiable way,
  • they identify problems which unit or integration tests might have missed,
  • and they provide an overview on how “done” the system is.

When looking at the process of software development, we can see that UAT is used to identify and verify client needs.



UNIT TESTING

UNIT TESTING is a level of software testing where individual units/ components of a software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output. In procedural programming, a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/ super class, abstract class or derived/ child class. (Some treat a module of an application as a unit. This is to be discouraged as there will probably be many individual units within that module.) Unit testing frameworks, drivers, stubs, and mock/ fake objects are used to assist in unit testing.
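
As a small illustration (not part of the original notes), this is roughly what a unit test looks like in Python with the standard unittest framework; calculate_discount is a hypothetical unit under test:

import unittest

def calculate_discount(price, percent):
    # Hypothetical unit under test: apply a percentage discount to a price.
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class CalculateDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(calculate_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(calculate_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Each test exercises one small behaviour of the unit in isolation, which is exactly the level of testing this section describes.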



Definition by ISTQB

  • unit testing: See component testing.
  • component testing: The testing of individual software components.

Unit Testing Method

It is performed by using the White Box Testing method.

When is it performed?

Unit Testing is the first level of software testing and is performed prior to Integration Testing.

Who performs it?

It is normally performed by software developers themselves or their peers. In rare cases, it may also be performed by independent software testers.

Unit Testing Benefits

  • Unit testing increases confidence in changing and maintaining code. If good unit tests are written and run every time any code is changed, defects introduced by the change are caught promptly. Also, if code is made less interdependent to make unit testing possible, the unintended impact of changes to any one piece of code is reduced.
  • Code is more reusable. To make unit testing possible, code needs to be modular, and modular code is easier to reuse.
  • Development is faster. Without unit testing you write your code and perform a fuzzy 'developer test' (set some breakpoints, fire up the GUI, provide a few inputs that hopefully hit your code, and hope you are all set). With unit testing in place, you write the test, write the code and run the test. Writing tests takes time, but that time is repaid because the tests are quick to run; you need not fire up the GUI and provide all those inputs. And, of course, unit tests are more reliable than 'developer tests'. Development is also faster in the long run: the effort required to find and fix defects found during unit testing is far less than the effort required to fix defects found during system testing or acceptance testing.
  • The cost of fixing a defect detected during unit testing is lower than that of defects detected at higher levels. Compare it with the cost (time, effort, destruction, humiliation) of a defect detected during acceptance testing, or when the software is live.
  • Debugging is easy. When a test fails, only the latest changes need to be debugged. With testing at higher levels, changes made over the span of several days, weeks or months need to be scanned.
  • Code is more reliable, for all of the reasons above.

 

What is Regression Testing?

Regression Testing is defined as a type of software testing to confirm that a recent program or code change has not adversely affected existing features.

Regression Testing is nothing but a full or partial selection of already executed test cases which are re-executed to ensure existing functionalities work fine.

This testing is done to make sure that new code changes do not have side effects on the existing functionalities. It ensures that the old code still works after the new code changes are made.
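
As a simple illustration (reusing the hypothetical Python unit-test sketch from the Unit Testing section above), the most lightweight form of regression testing is to re-run the whole existing suite after every change and watch for previously passing tests that now fail:

python -m unittest discover -v

A test that passed before the change and fails after it points directly at a regression.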

Need of Regression Testing

Regression Testing is required when there is a:

  • Change in requirements, with code modified according to the new requirement
  • New feature added to the software
  • Defect fix
  • Performance issue fix

Software maintenance is an activity which includes enhancements, error corrections, optimization and deletion of existing features. These modifications may cause the system to work incorrectly. Therefore, Regression Testing becomes necessary.

Software Baselining and Debugging

Baselining is the process of setting up the common, minimum requirements of an enterprise. This could be for a group of computers or all the computers in the network. When a new computer is added to the domain, the common minimum requirements are installed and applied automatically. This saves a lot of time and effort for the administrators.

Scenario

Assume you are managing 500 computers using Desktop Central. All the computers should have some basic software applications like Adobe Reader, Microsoft Outlook, etc. Since you know the basic requirement, you can create a baseline for the required software applications and apply it to the required computers or across the network. This ensures that whenever a new computer is added to the domain, the baseline gets applied by default.

Definition of 'Debugging'

 

Definition:

  • Debugging is the process of detecting and removing existing and potential errors (also called 'bugs') in software code that can cause it to behave unexpectedly or crash.
  • To prevent incorrect operation of a software system, debugging is used to find and resolve bugs or defects.
  • When various subsystems or modules are tightly coupled, debugging becomes harder, as a change in one module may cause more bugs to appear in another. Sometimes it takes more time to debug a program than to code it.

 

Description:

To debug a program, the user starts with a problem, isolates the source code causing it, and then fixes it. A user of a program must know how to fix the problem, as knowledge of problem analysis is expected. When the bug is fixed, the software is ready to use. Debugging tools (called debuggers) are used to identify coding errors at various development stages. They are used to reproduce the conditions in which the error occurred, then examine the program state at that time and locate the cause. Programmers can trace the program execution step by step, evaluate the values of variables, and stop the execution wherever required to inspect or reset program variables. Some programming language packages provide a debugger for checking the code for errors while it is being run.
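
As a minimal sketch (assuming Python and its built-in pdb debugger; the function and values are hypothetical), the workflow described above (pause at a breakpoint, inspect variables, then continue) looks like this:

import pdb

def average(values):
    total = sum(values)
    pdb.set_trace()              # execution pauses here; inspect 'total' and 'values'
    return total / len(values)   # would fail if 'values' were empty

print(average([10, 20, 30]))

At the pdb prompt, commands such as p total (print a variable), n (step to the next line) and c (continue) let you examine the program state exactly as described.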

 

Data flow diagram (DFD )

https://www.lucidchart.com/pages/data-flow-diagram?a=0

A data flow diagram (DFD) is a way of representing the flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and of the process itself. A data flow diagram has no control flow: there are no decision rules and no loops. Specific operations based on the data can be represented by a flowchart.

 

A data flow diagram can dive into progressively more detail by using levels and layers, zeroing in on a particular piece. DFD levels are numbered 0, 1 or 2, and occasionally go to Level 3 or beyond. The necessary level of detail depends on the scope of what you are trying to accomplish.

  • DFD Level 0 is also called a Context Diagram. It’s a basic overview of the whole system or process being analyzed or modeled. It’s designed to be an at-a-glance view, showing the system as a single high-level process, with its relationship to external entities. It should be easily understood by a wide audience, including stakeholders, business analysts, data analysts and developers. 

  • DFD Level 1 provides a more detailed breakout of pieces of the Context-Level Diagram. You will highlight the main functions carried out by the system as you break down the high-level process of the Context Diagram into its subprocesses.

DFDs are used in software engineering, business analysis, business process re-engineering, agile development, and in describing system structures.

Context diagram

The first level of data flow diagram, known as the context diagram, describes the overall functionality required by the external entities; it can be decomposed into a number of sub-level DFDs in a hierarchical manner.

Context-Level Diagram

A context diagram gives an overview and it is the highest level in a data flow diagram, containing only one process representing the entire system. It should be split into major processes which give greater detail and each major process may further split to give more detail.

  • All external entities are shown on the context diagram as well as major data flow to and from them.
  • The diagram does not contain any data storage.
  • The single process in the context-level diagram, representing the entire system, can be exploded to include the major processes of the system in the next-level diagram, which is termed Diagram 0.

What is an Entity Relationship Diagram (ERD)?

  • An entity relationship diagram (ERD) shows the relationships of entity sets stored in a database. An entity in this context is an object, a component of data. An entity set is a collection of similar entities. These entities can have attributes that define their properties.
  • By defining the entities and their attributes, and showing the relationships between them, an ER diagram illustrates the logical structure of databases.
  • ER diagrams are used to sketch out the design of a database.


Common Entity Relationship Diagram Symbols

An ER diagram is a means of visualizing how the information a system produces is related. There are five main components of an ERD:

  • Entities, which are represented by rectangles. An entity is an object or concept about which you want to store information. A weak entity is an entity that must be defined by a foreign-key relationship with another entity, as it cannot be uniquely identified by its own attributes alone.

  • Actions, which are represented by diamond shapes, show how two entities share information in the database. In some cases, entities can be self-linked. For example, employees can supervise other employees.


  • Attributes, which are represented by ovals. A key attribute is the unique, distinguishing characteristic of the entity. For example, an employee's social security number might be the employee's key attribute.


    A multivalued attribute can have more than one value. For example, an employee entity can have multiple skill values.

    A derived attribute is based on another attribute. For example, an employee's monthly salary is based on the employee's annual salary.

  • Connecting lines, solid lines that connect attributes to show the relationships of entities in the diagram.
  • Cardinality specifies how many instances of an entity relate to one instance of another entity. Ordinality is also closely linked to cardinality. While cardinality specifies the occurrences of a relationship, ordinality describes the relationship as either mandatory or optional. In other words, cardinality specifies the maximum number of relationships and ordinality specifies the absolute minimum number of relationships.
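
As a brief, hedged illustration (the table and column names are invented for this example), these components map naturally onto SQL: each entity becomes a table, its key attribute becomes the primary key, and cardinality between entities is enforced with a foreign key:

CREATE TABLE department (
  dept_id    NUMBER PRIMARY KEY,   -- key attribute of the DEPARTMENT entity
  dept_name  VARCHAR2(50)          -- ordinary attribute
);

CREATE TABLE employee (
  emp_id     NUMBER PRIMARY KEY,                      -- key attribute of the EMPLOYEE entity
  emp_name   VARCHAR2(50),
  dept_id    NUMBER REFERENCES department (dept_id)   -- one department relates to many employees
);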

Documentation

Documentation is a set of documents provided on paper, or online, or on digital or analog media, such as audio tape or CDs. Examples are user guides, white papers, on-line help, quick-reference guides. It is becoming less common to see paper (hard-copy) documentation. Documentation is distributed via websites, software products, and other on-line applications.

 

Report

A document that presents information in an organized format for a specific audience and purpose. Although summaries of reports may be delivered orally, complete reports are almost always in the form of written documents.

 

 

 

April 11, 2025

Structured and object oriented programming

 

Abstract data type


  • A useful tool for specifying the logical properties of a data type is the abstract data type or ADT.

  • Fundamentally, a data type is a collection of values and a set of operations on those values.

  • That collection and those operations form a mathematical construct that may be implemented using a particular hardware or software data structure.

  • The term “Abstract Data type” refers to the basic mathematical concept that defines the data type.

  • Formally, an abstract data type is a data declaration packaged together with the operations that are meaningful on the data type.

  • In other words, we can encapsulate the data and the operations on the data and hide them from the user.

  • It is a mathematical model that contains a set of values and functions on those values, without specifying the details of those functions.

Examples of ADT :

1. Linear ADTs:

Stack (last-in, first-out) ADT

Queue (first-in, first-out) ADT

a. Lists ADTs

Arrays

Linked List

Circular List

Doubly Linked List

2. Non-Linear ADTs

a. Trees

Binary Search Tree ADT

b. Heaps

c. Graphs

Undirected

Directed

d. Hash Tables

We can perform the following basic operations on ADTs: insert(), delete(), search(), findMin(), findMax(), findNext(), findPrevious(), enqueue(), dequeue(), etc. A small sketch of one such ADT (a stack) follows.
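
As a brief sketch (in Python, only one of many possible implementation languages), here is a Stack ADT: the list that actually stores the data is hidden behind push(), pop(), peek() and is_empty(), so callers depend only on the operations, not on the representation:

class Stack:
    # A last-in, first-out (LIFO) abstract data type.

    def __init__(self):
        self._items = []          # internal representation, hidden from users

    def push(self, item):
        self._items.append(item)  # insert on top

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from an empty stack")
        return self._items.pop()  # remove and return the top element

    def peek(self):
        if self.is_empty():
            raise IndexError("peek at an empty stack")
        return self._items[-1]    # look at the top element without removing it

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2 (last in, first out)
print(s.peek())  # 1

Swapping the internal list for, say, a linked list would not change any calling code, which is the essence of an abstract data type.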


Computer Architecture

 

RISC/CISC architecture

  1. Comparison between RISC and CISC:

 

  • Acronym: RISC stands for 'Reduced Instruction Set Computer'; CISC stands for 'Complex Instruction Set Computer'.
  • Definition: RISC processors have a smaller set of instructions with few addressing modes; CISC processors have a larger set of instructions with many addressing modes.
  • Memory unit: RISC has no memory unit and uses separate hardware to implement instructions; CISC has a memory unit to implement complex instructions.
  • Program: RISC has a hard-wired programming unit; CISC has a micro-programming unit.
  • Design: RISC requires a complex compiler design; CISC allows an easier compiler design.
  • Calculations: RISC calculations are faster; CISC calculations are slower.
  • Decoding: Decoding of instructions is simple in RISC; it is complex in CISC.
  • Time: Execution time is very low in RISC; it is very high in CISC.
  • External memory: RISC does not require external memory for calculations; CISC requires external memory for calculations.
  • Pipelining: Pipelining functions correctly in RISC; it does not function correctly in CISC.
  • Stalling: Stalling is mostly reduced in RISC processors; CISC processors often stall.
  • Code expansion: Code expansion can be a problem in RISC; it is not a problem in CISC.
  • Disk space: Disk space is saved with RISC; it is wasted with CISC.
  • Applications: RISC is used in high-end applications such as video processing, telecommunications and image processing; CISC is used in low-end applications such as security systems, home automation, etc.

Difference between RISC and CISC

  1. RISC has a simple instruction set; CISC has a complex instruction set.
  2. RISC has a large number of registers; CISC has fewer registers.
  3. RISC programs are larger; CISC programs are smaller.
  4. RISC uses simple processor circuitry (a small number of transistors); CISC uses complex processor circuitry (a larger number of transistors).
  5. RISC uses more RAM; CISC uses less RAM.
  6. RISC has simple addressing modes; CISC has a variety of addressing modes.
  7. RISC uses fixed-length instructions; CISC uses variable-length instructions.
  8. RISC takes a fixed number of clock cycles to execute one instruction; CISC takes a variable number of clock cycles per instruction.
