April 02, 2026

Very Large Scale Integration (VLSI)

1.1 Introduction to VLSI

Very Large Scale Integration (VLSI) refers to the process of creating an integrated circuit (IC) by combining millions to billions of transistors onto a single chip. Modern chips (2nm process) contain more than 10 billion transistors. VLSI design is the methodology for designing such complex chips reliably and efficiently.

Classification | Transistors per Die | Era | Examples
SSI (Small-Scale Integration) | 1-100 | 1960s | logic gates, flip-flops
MSI (Medium-Scale Integration) | 100-1,000 | late 1960s | counters, multiplexers, adders
LSI (Large-Scale Integration) | 1,000-100,000 | 1970s | 8-bit microprocessors, RAM chips
VLSI (Very Large Scale Integration) | >100,000 | 1980s-present | microprocessors, memories, SoCs
1.1.1 Moore's Law

Moore's Law (Gordon Moore, 1965): the number of transistors on an integrated circuit doubles approximately every two years, while the cost per transistor halves. It has driven the semiconductor industry for 60 years.

• Practical implication: Performance doubles and price halves roughly every 18-24 months
• Slowdown: Physical limits (atomic scale, quantum tunneling, heat) are slowing classical Moore's Law at sub-5nm nodes
• Continuation strategies: 3D stacking (chiplets, HBM memory), new materials (GaN, SiGe), new transistor structures (FinFET, GAAFET, nanosheet)

1.2 CMOS Technology

CMOS (Complementary Metal-Oxide-Semiconductor) is the dominant technology for digital ICs. It uses complementary pairs of PMOS and NMOS transistors to implement logic functions.
1.2.1 MOS Transistor Operation
1.2.2 CMOS Inverter – The Basic Gate
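As a rough numeric sanity check of the inverter's dynamic power (P_dyn = α × C_L × V_DD² × f) and propagation delay (t_p = 0.69 × R × C_L), here is a small sketch; all component values are assumed for illustration, not taken from a real process:

```python
# Back-of-the-envelope CMOS inverter power/delay; all values are illustrative.
alpha = 0.1        # activity factor (assumed)
C_L = 10e-15       # load capacitance, 10 fF (assumed)
V_DD = 1.0         # supply voltage, 1 V (assumed)
f = 1e9            # clock frequency, 1 GHz (assumed)
R_n = 10e3         # NMOS effective on-resistance, 10 kOhm (assumed)

P_dyn = alpha * C_L * V_DD**2 * f    # dynamic switching power
t_pHL = 0.69 * R_n * C_L             # high-to-low propagation delay

print(f"P_dyn = {P_dyn * 1e6:.1f} uW")    # 1.0 uW
print(f"t_pHL = {t_pHL * 1e12:.1f} ps")   # 69.0 ps
```

Note how halving V_DD would cut P_dyn by 4x (the V² term) while increasing delay, the speed-power trade-off described below.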
• Dynamic power: P_dyn = α × C_L × V_DD² × f, where α = activity factor (0-1), C_L = load capacitance, f = clock frequency
• Propagation delay: t_pHL = 0.69 × R_n × C_L (fall); t_pLH = 0.69 × R_p × C_L (rise)
• Speed-power product: Lower supply voltage reduces power (∝ V²) but increases delay
• Sizing: PMOS is made 2-3× wider than NMOS (for the same R) because hole mobility is roughly 1/2-1/3 of electron mobility

1.3 VLSI Fabrication Process (CMOS)
1.4 VLSI Design Hierarchy

• System level: Architecture design; specification in C/SystemC/SystemVerilog
• Register Transfer Level (RTL): Describe data flow and operations in Verilog/VHDL
• Logic level: Boolean equations; gate-level netlist after synthesis
• Circuit level: Transistor-level schematic with sizing; SPICE simulation
• Physical/Layout level: Geometric representation; polygons on layers; DRC/LVS
• Process level: Manufacturing steps; process parameters; yield

1.4.1 Design Verification Steps

• DRC (Design Rule Check): Verify geometries satisfy manufacturing constraints
• LVS (Layout vs Schematic): Confirm extracted netlist matches schematic
• Parasitic Extraction (RC): Extract resistances and capacitances from layout
• STA (Static Timing Analysis): Verify all timing paths meet setup/hold constraints
• Functional simulation: RTL-level; gate-level; post-layout simulation
• Power analysis: IR drop, dynamic power, thermal analysis

2. Simplified Design Rules
2.1 Purpose of Design Rules
Design rules are a set of geometric constraints that define the minimum feature sizes and spacings allowed in an IC layout for a given process technology. They ensure reliable fabrication with acceptable yield. Design rules translate process limitations into geometric constraints on the layout.
 
2.1.1 Why Design Rules are Needed

• Lithography limitations: Diffraction limits the minimum printable feature; mask misalignment causes layer-to-layer offset
• Etching variations: Dry/wet etch undercutting varies; spacing must accommodate the worst case
• Implant straggle: Ion implants spread laterally; junction edges must be properly spaced
• CMP non-uniformity: Chemical-mechanical polishing removes material non-uniformly; density rules are needed
• Electromigration: High current density in metal causes atomic migration; minimum metal widths set current capacity
• Yield optimization: Larger geometries have lower defect probability but waste area

2.2 Lambda (λ) Design Rules – Mead-Conway Approach

Lambda (λ) design rules were introduced by Carver Mead and Lynn Conway (1980) as a technology-independent, scalable design methodology. All design rule dimensions are expressed in multiples of λ, where λ = half the minimum gate length for a given process technology.

λ = L_min / 2 (half the minimum feature size)

Example: For a 90nm process: L_min = 90nm, λ = 45nm. For a 180nm process: L_min = 180nm, λ = 90nm.
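The λ-to-nanometre scaling can be sketched in code; the rule multiples below follow a classic MOSIS-style λ rule set and are illustrative assumptions, not the rules of any specific foundry:

```python
# Sketch: scaling a few representative Mead-Conway lambda rules to nanometres.
# The multiples follow a classic MOSIS-style rule set (assumed, for illustration).
RULES_IN_LAMBDA = {
    "poly width": 2,       # minimum polysilicon (gate) width, in lambda
    "metal1 width": 3,     # minimum metal1 wire width
    "metal1 spacing": 3,   # minimum metal1-to-metal1 spacing
    "contact size": 2,     # contact cut edge length
}

def rules_in_nm(l_min_nm):
    """Convert lambda-based rules to nm for a process with minimum feature L_min."""
    lam = l_min_nm / 2     # lambda = half the minimum feature size
    return {name: mult * lam for name, mult in RULES_IN_LAMBDA.items()}

print(rules_in_nm(180))    # 180nm process: lambda = 90nm, so poly width = 180nm, etc.
```

The same layout, expressed in λ, can thus be retargeted to a new process by changing a single number, which is the point of the Mead-Conway approach.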




2.2.1 Special Design Rules

• Antenna rules: Limit the area of floating poly/metal during etch to prevent gate oxide damage from charge buildup; fix with diode protection or jumpers
• Density rules (fill): Each layer must have 20-80% density per unit area for CMP uniformity; use dummy metal/poly fill
• Latch-up rules: Guard rings (N+ tie in N-well, P+ tie in substrate) are required near NMOS/PMOS to prevent parasitic SCR triggering
• Electromigration rules: Keep metal current density below about 1 mA/µm (M1) to prevent atomic migration failure; use wider wires for power rails
• Critical area analysis (CAA): Statistical yield estimation based on defect density and critical area per layer


2.3 Process Layers in CMOS Layout
 
3. Static and Dynamic Logic, Multiphase Clocking

3.1  Static CMOS Logic

Static CMOS gates maintain their output state as long as power is applied, regardless of clock. They use complementary pull-up network (PUN) of PMOS transistors and pull-down network (PDN) of NMOS transistors.


3.1.1  Complementary CMOS Design Rules

Duality: PUN is the dual network of PDN. For every NMOS series connection, the corresponding PMOS is parallel (and vice versa)

NAND gate: PDN = NMOS series (A AND B must both be high); PUN = PMOS parallel

NOR gate: PDN = NMOS parallel (either A OR B high → output low); PUN = PMOS series

Complex gates: NAND-NOR combinations in one stage (AOI, OAI gates) reduce transistor count

Transistor count: 2n transistors for n-input gate (n PMOS + n NMOS)

No static power: Never a DC path from VDD to GND → zero static power (only leakage)

Full swing: Output always swings rail-to-rail (0 to VDD) → maximum noise margin
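The duality and no-static-power properties can be checked mechanically. This sketch models the two networks of a 2-input NAND as boolean conduction conditions, a simplification that ignores transistor sizing and delays:

```python
# Sketch: verify PUN/PDN duality for a 2-input static CMOS NAND gate.
# PDN conducting pulls the output to 0; PUN conducting pulls it to 1.

def nand_pdn(a, b):
    """NMOS in series: conducts only when A AND B are both high."""
    return bool(a and b)

def nand_pun(a, b):
    """PMOS in parallel: conducts when A is low OR B is low (dual of the PDN)."""
    return (not a) or (not b)

for a in (0, 1):
    for b in (0, 1):
        pdn, pun = nand_pdn(a, b), nand_pun(a, b)
        # Exactly one network conducts: no DC path (no static power), full swing.
        assert pdn != pun
        out = 1 if pun else 0
        assert out == 1 - (a & b)   # matches the NAND truth table
print("PUN/PDN duality verified for all input combinations")
```

Swapping series for parallel in both definitions gives the NOR gate, illustrating the duality rule stated above.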

3.1.2  AOI and OAI Complex Gates



3.1.3  Ratioed Logic

Pseudo-NMOS: PMOS always-on load + NMOS PDN; faster but has static power; ratio must satisfy noise margins

DCVS (Differential Cascode Voltage Switch): Complementary outputs; self-timed; fast for XOR/adder functions

CVSL (Cascode Voltage Switch Logic): Uses cross-coupled PMOS loads; evaluates both Q and Q' simultaneously

Pass transistor logic (PTL): Uses a single NMOS as a pass switch; simpler wiring, but suffers a V_T loss when an NMOS passes a "1"

Transmission gate (TG): NMOS + PMOS in parallel as bilateral switch; full swing transmission; uses 2T per switch

3.2  Dynamic CMOS Logic




July 15, 2025

Oracle 19c Installation Guide (Linux/Windows)

How to configure Oracle Listener Step-by-Step Guide

 Oracle Listener

  • A server-side process that listens for incoming client connection requests to Oracle databases.
  • Uses the listener.ora configuration file, typically located in:
    • $ORACLE_HOME/network/admin (Linux)
    • %ORACLE_HOME%\network\admin (Windows)

Step-by-step Listener Configuration

Create/Modify listener.ora
Configure tnsnames.ora
Start the listener (lsnrctl start)
Test connectivity using tnsping
Register services if needed (ALTER SYSTEM REGISTER)

Step 1: Check if a listener is already running

lsnrctl status

  • If it says TNS-12541: TNS:no listener, no listener is running.
  • Otherwise, it will display current listener status.

Step 2: Create/Modify listener.ora

cd $ORACLE_HOME/network/admin

Open listener.ora in your editor:

vi listener.ora

Add or modify:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = your-hostname-or-ip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

Explanation:

  • PROTOCOL = TCP: Network protocol.
  • HOST: Your server hostname or IP (localhost if local).
  • PORT = 1521: Default Oracle listener port.
  • PROTOCOL = IPC: For local (inter-process) connections, including external procedures.


Step 3: Create/Modify tnsnames.ora (Client/Server)

In the same folder, edit tnsnames.ora:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = your-hostname-or-ip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

  • ORCL is your connection alias.
  • SERVICE_NAME should match your database service name (SELECT value FROM v$parameter WHERE name = 'service_names';).


Step 4: Start the Listener

lsnrctl start

Verify:

lsnrctl status

You should see your listener, hostname, and port.


Step 5: Test the Listener Connection

tnsping ORCL

Expected output:

OK (20 msec)

This confirms your listener is reachable.


Step 6: Add the Database to the Listener (if not automatically registered)

In SQL*Plus as SYSDBA:

ALTER SYSTEM REGISTER;

This will dynamically register your database with the listener (Dynamic Service Registration).

Verify using:

lsnrctl services

You should see your database service listed under the listener.
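If the database cannot register itself (for example, it is down or being started for recovery), a static registration entry can be added to listener.ora as an alternative to dynamic registration. The ORACLE_HOME path and names below are placeholders for your environment:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = orcl)
    )
  )
```

After editing, run lsnrctl reload so the listener picks up the change. A statically registered service shows status UNKNOWN in lsnrctl services, which is expected: the listener has not received a heartbeat from the instance.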


Step 7: Stop the Listener

To stop:

lsnrctl stop

To reload the configuration without stopping the listener:

lsnrctl reload

(To restart fully, run lsnrctl stop followed by lsnrctl start.)



 Troubleshooting Tips

  • Ensure port 1521 is open in your firewall.
  • Check listener.log (under $ORACLE_BASE/diag/tnslsnr/<hostname>/<listener_name>/trace in recent releases, or $ORACLE_HOME/network/log in older ones).
  • If you see TNS-12541: TNS:no listener, ensure the listener is running.
  • Ensure the SERVICE_NAME in tnsnames.ora matches the database service name.
  • Use ALTER SYSTEM REGISTER if the service does not appear after starting.


 


June 05, 2025

Software Development Life Cycle

 

Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce software that meets or exceeds customer expectations and is completed within time and cost estimates.

  • SDLC is the acronym of Software Development Life Cycle.
  • It is also called the Software Development Process.
  • SDLC is a framework defining tasks performed at each step in the software development process.
  • ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.

What is SDLC?

SDLC is a process followed for a software project, within a software organization. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of software and the overall development process.

The following figure is a graphical representation of the various stages of a typical SDLC.

A typical Software Development Life Cycle consists of the following stages −

Stage 1: Planning and Requirement Analysis

Requirement analysis is the most important and fundamental stage in SDLC. It is performed by the senior members of the team with inputs from the customer, the sales department, market surveys and domain experts in the industry. This information is then used to plan the basic project approach and to conduct product feasibility study in the economical, operational and technical areas.

Planning for the quality assurance requirements and identification of the risks associated with the project is also done in the planning stage. The outcome of the technical feasibility study is to define the various technical approaches that can be followed to implement the project successfully with minimum risks.

Stage 2: Defining Requirements

Once the requirement analysis is done the next step is to clearly define and document the product requirements and get them approved from the customer or the market analysts. This is done through an SRS (Software Requirement Specification) document which consists of all the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture

SRS is the reference for product architects to come out with the best architecture for the product to be developed. Based on the requirements specified in SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS - Design Document Specification.

This DDS is reviewed by all the important stakeholders and based on various parameters as risk assessment, product robustness, design modularity, budget and time constraints, the best design approach is selected for the product.

A design approach clearly defines all the architectural modules of the product along with its communication and data flow representation with the external and third party modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined with the minutest of the details in DDS.

Stage 4: Building or Developing the Product

In this stage of SDLC the actual development starts and the product is built. The programming code is generated as per DDS during this stage. If the design is performed in a detailed and organized manner, code generation can be accomplished without much hassle.

Developers must follow the coding guidelines defined by their organization and programming tools like compilers, interpreters, debuggers, etc. are used to generate the code. Different high level programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming language is chosen with respect to the type of software being developed.

Stage 5: Testing the Product

This stage is usually a subset of all the stages as in the modern SDLC models, the testing activities are mostly involved in all the stages of SDLC. However, this stage refers to the testing only stage of the product where product defects are reported, tracked, fixed and retested, until the product reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance

Once the product is tested and ready to be deployed it is released formally in the appropriate market. Sometimes product deployment happens in stages as per the business strategy of that organization. The product may first be released in a limited segment and tested in the real business environment (UAT- User acceptance testing).

Then based on the feedback, the product may be released as it is or with suggested enhancements in the targeting market segment. After the product is released in the market, its maintenance is done for the existing customer base.

SDLC Models

There are various software development life cycle models defined and designed which are followed during the software development process. These models are also referred to as "Software Development Process Models". Each process model follows a series of steps unique to its type to ensure success in the process of software development.

Following are the most important and popular SDLC models followed in the industry −

  • Waterfall Model
  • Iterative Model
  • Spiral Model
  • V-Model
  • Big Bang Model

Other related methodologies are the Agile Model, the RAD (Rapid Application Development) Model, and Prototyping Models.

What is Software Testing?

Software testing is defined as an activity to check whether the actual results match the expected results and to ensure that the software system is defect-free. It involves executing a software or system component to evaluate one or more properties of interest.

Software testing also helps to identify errors, gaps or missing requirements contrary to the actual requirements. It can be done either manually or using automated tools. Software testing is often divided into White Box and Black Box Testing.

In simple terms, Software Testing means Verification of Application Under Test (AUT).

Why is Software Testing Important?

Testing is important because software bugs could be expensive or even dangerous. Software bugs can potentially cause monetary and human loss, and history is full of such examples.

  • In April 2015, the Bloomberg terminal in London crashed due to a software glitch, affecting more than 300,000 traders on financial markets. It forced the government to postpone a £3bn debt sale.
  • Nissan had to recall over 1 million cars due to a software failure in the airbag sensor detectors. Two accidents were reported because of this failure.
  • Starbucks was forced to close about 60 percent of its stores in the U.S. and Canada due to a software failure in its POS system. At one point, stores served coffee for free because they were unable to process transactions.
  • Some of Amazon's third-party retailers saw their product prices reduced to 1p due to a software glitch, leaving them with heavy losses.
  • A vulnerability in Windows 10 enabled users to escape security sandboxes through a flaw in the win32k system.
  • In 2015, the F-35 fighter plane fell victim to a software bug that made it unable to detect targets correctly.
  • A China Airlines Airbus A300 crashed due to a software bug on April 26, 1994, killing 264 people.
  • In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people dead and 3 others critically injured.
  • In April 1999, a software bug caused the failure of a $1.2 billion military satellite launch, one of the costliest accidents in history.
  • In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.

Types of Software Testing

Typically, testing is classified into three categories −

  • Functional Testing
  • Non-Functional Testing or Performance Testing
  • Maintenance Testing (Regression and Maintenance)

 

 

 

User acceptance testing (UAT)

User acceptance testing (UAT) is the last phase of the software testing process. During UAT, actual software users test the software to make sure it can handle required tasks in real-world scenarios, according to specifications.

  • UAT is one of the final and most critical software project procedures that must occur before newly developed software is rolled out to the market.

  • UAT is also known as beta testing, application testing or end-user testing.

UAT is important because it helps demonstrate that required business functions are operating in a manner suited to real-world circumstances and usage.

Acceptance tests are useful, because:

  • they capture user requirements in a directly verifiable way,
  • they identify problems which unit or integration tests might have missed,
  • and they provide an overview of how “done” the system is.

When looking at the process of software development, we can see that UAT is used to identify and verify client needs.



UNIT TESTING

UNIT TESTING is a level of software testing where individual units/components of a software system are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software; it usually has one or a few inputs and usually a single output. In procedural programming, a unit may be an individual program, function or procedure. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. (Some treat a module of an application as a unit. This is discouraged, as there will probably be many individual units within that module.) Unit testing frameworks, drivers, stubs, and mock/fake objects are used to assist in unit testing.



Definition by ISTQB

  • unit testing: See component testing.
  • component testing: The testing of individual software components.

Unit Testing Method

It is performed by using the White Box Testing method.

When is it performed?

Unit Testing is the first level of software testing and is performed prior to Integration Testing.

Who performs it?

It is normally performed by software developers themselves or their peers. In rare cases, it may also be performed by independent software testers.

Unit Testing Benefits

  • Unit testing increases confidence in changing/maintaining code. If good unit tests are written and run every time any code is changed, defects introduced by the change are caught promptly. Also, if code is made less interdependent to make unit testing possible, the unintended impact of changes to any code is smaller.
  • Code is more reusable. To make unit testing possible, code needs to be modular, and modular code is easier to reuse.
  • Development is faster. Without unit testing, you write your code and perform a fuzzy 'developer test': set some breakpoints, fire up the GUI, provide a few inputs that hopefully hit your code, and hope that you are all set. With unit testing, you write the test, write the code and run the test; there is no need to fire up the GUI and provide all those inputs, and unit tests are more reliable than 'developer tests'. Development is also faster in the long run, because the effort required to find and fix defects during unit testing is much smaller than the effort required to fix defects found during system or acceptance testing.
  • The cost of fixing a defect detected during unit testing is lower than that of defects detected at higher levels. Compare it with the cost (time, effort, disruption, humiliation) of a defect detected during acceptance testing, or after the software is live.
  • Debugging is easy. When a test fails, only the latest changes need to be debugged. With testing at higher levels, changes made over the span of several days, weeks or months need to be scanned.
  • Code is more reliable, for all the reasons above.
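A minimal unit test might look like the following sketch, using Python's built-in unittest framework; the add_vat function is a made-up unit under test, not from the text:

```python
# Minimal unit-test sketch with Python's built-in unittest framework.
import unittest

def add_vat(price, rate=0.2):
    """The unit under test (hypothetical): return price with VAT applied."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 + rate), 2)

class AddVatTest(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(add_vat(100), 120.0)

    def test_zero_price(self):
        self.assertEqual(add_vat(0), 0.0)

    def test_negative_price_rejected(self):
        # The unit's error handling is part of its designed behaviour.
        with self.assertRaises(ValueError):
            add_vat(-1)

if __name__ == "__main__":
    unittest.main(argv=["example"], exit=False, verbosity=2)
```

Each test exercises one small behaviour of the unit in isolation, which is what makes failures easy to localise.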

 

What is Regression Testing?

Regression Testing is defined as a type of software testing to confirm that a recent program or code change has not adversely affected existing features.

Regression Testing is nothing but a full or partial selection of already executed test cases which are re-executed to ensure existing functionalities work fine.

This testing is done to make sure that new code changes do not have side effects on existing functionality. It ensures that the old code still works once the new code changes are made.

Need of Regression Testing

Regression Testing is required when there is a:

  • Change in requirements, with code modified accordingly
  • New feature added to the software
  • Defect fix
  • Performance issue fix

Software maintenance is an activity which includes enhancements, error corrections, optimization and deletion of existing features. These modifications may cause the system to work incorrectly. Therefore, Regression Testing becomes necessary.
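The re-execution idea can be sketched in a few lines; the function and recorded cases below are hypothetical, and real projects would use a test framework rather than a hand-rolled list:

```python
# Sketch: a tiny regression harness that re-runs recorded (input, expected)
# cases after every code change. The function and cases are made up.

def discount(price, code):
    """Code under maintenance: 'SAVE10' gives 10% off (hypothetical rule)."""
    return price * 0.9 if code == "SAVE10" else price

# Cases captured while earlier releases were known to be correct;
# any mismatch after a change signals a regression.
REGRESSION_CASES = [
    ((100, "SAVE10"), 90.0),
    ((100, ""), 100.0),
    ((0, "SAVE10"), 0.0),
]

def run_regression():
    """Re-execute every recorded case; return the list of failures."""
    return [(args, expected, discount(*args))
            for args, expected in REGRESSION_CASES
            if discount(*args) != expected]

assert run_regression() == []   # an empty failure list means no regressions
```

If a later "enhancement" accidentally changed the discount rate, run_regression() would return the broken cases, which is exactly the safety net regression testing provides.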

Software Base Lining and Debugging

Baselining is the process of setting up the common, minimum requirements of an enterprise. This could be for a group of computers or all the computers in the network. When a new computer is added to the domain, the common minimum requirements are installed and applied automatically. This saves a lot of time and effort for the administrators.

Scenario

Assume you are managing 500 computers using Desktop Central. All the computers should have some basic software applications like Adobe Reader, Microsoft Outlook, etc. Since you know the basic requirements, you can create a baseline for the required software applications and apply it to the required computers or across the network. This ensures that whenever a new computer is added to the domain, the baseline gets applied by default.

Definition of 'Debugging'

 

Definition:

  • Debugging is the process of detecting and removing existing and potential errors (also called 'bugs') in software code that can cause it to behave unexpectedly or crash.

  • To prevent incorrect operation of a software system, debugging is used to find and resolve bugs or defects.

  • When various subsystems or modules are tightly coupled, debugging becomes harder, as any change in one module may cause more bugs to appear in another. Sometimes it takes more time to debug a program than to code it.

 

Description:

To debug a program, the user starts with a problem, isolates the source code causing it, and then fixes it; this requires some skill in problem analysis. Once the bug is fixed, the software is ready to use. Debugging tools (called debuggers) are used to identify coding errors at various development stages. They are used to reproduce the conditions in which an error occurred, then examine the program state at that point and locate the cause. Programmers can trace the program execution step by step, evaluating the values of variables, and stop the execution wherever required to inspect or reset program variables. Some programming language packages provide a debugger for checking the code for errors while it runs.
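As a minimal illustration of stopping execution to inspect state, here is a sketch using Python's standard pdb debugger; the function and values are made up:

```python
# Sketch: pausing execution with Python's built-in debugger, pdb.
import pdb

def average(values):
    """Hypothetical function being debugged."""
    total = sum(values)
    # pdb.set_trace()   # uncomment to pause here; inspect 'total', step with 'n'
    return total / len(values)

result = average([2, 4, 6])
print(result)   # 4.0
```

With the set_trace() line enabled, execution halts at that point and the programmer can print variables, step line by line, and continue, reproducing exactly the inspect-and-locate workflow described above.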

 

Data flow diagram (DFD )

https://www.lucidchart.com/pages/data-flow-diagram?a=0

A data flow diagram (DFD) is a way of representing the flow of data through a process or a system (usually an information system). The DFD also provides information about the inputs and outputs of each entity and of the process itself. A data flow diagram has no control flow: there are no decision rules and no loops. Specific operations based on the data can be represented by a flowchart.

 

A data flow diagram can dive into progressively more detail by using levels and layers, zeroing in on a particular piece. DFD levels are numbered 0, 1 or 2, and occasionally go even to Level 3 or beyond. The necessary level of detail depends on the scope of what you are trying to accomplish.

  • DFD Level 0 is also called a Context Diagram. It’s a basic overview of the whole system or process being analyzed or modeled. It’s designed to be an at-a-glance view, showing the system as a single high-level process, with its relationship to external entities. It should be easily understood by a wide audience, including stakeholders, business analysts, data analysts and developers. 

  • DFD Level 1 provides a more detailed breakout of pieces of the Context Level Diagram. You will highlight the main functions carried out by the system as you break down the high-level process of the Context Diagram into its subprocesses.

DFDs are used in software engineering, business analysis, business process re-engineering, agile development, and the documentation of system structures.

Context diagram

The first level of data flow diagram, known as the context diagram, describes the overall functionality required by the external entities; it can be decomposed into a number of sub-level DFDs in a hierarchical manner.

Context-Level Diagram

A context diagram gives an overview and it is the highest level in a data flow diagram, containing only one process representing the entire system. It should be split into major processes which give greater detail and each major process may further split to give more detail.

  • All external entities are shown on the context diagram as well as major data flow to and from them.
  • The diagram does not contain any data storage.
  • The single process in the context-level diagram, representing the entire system, can be exploded to include the major processes of the system in the next level diagram, which is termed as diagram 0.

What is an Entity Relationship Diagram (ERD)?

An entity relationship diagram (ERD) shows the relationships of entity sets stored in a database. An entity in this context is an object, a component of data. An entity set is a collection of similar entities. These entities can have attributes that define their properties.

By defining the entities and their attributes, and showing the relationships between them, an ER diagram illustrates the logical structure of databases.

ER diagrams are used to sketch out the design of a database.


Common Entity Relationship Diagram Symbols

An ER diagram is a means of visualizing how the information a system produces is related. There are five main components of an ERD:

  • Entities, which are represented by rectangles. An entity is an object or concept about which you want to store information. A weak entity is an entity that must be defined by a foreign-key relationship with another entity, as it cannot be uniquely identified by its own attributes alone.

  • Actions, which are represented by diamond shapes, show how two entities share information in the database. In some cases, entities can be self-linked. For example, employees can supervise other employees.


  • Attributes, which are represented by ovals. A key attribute is the unique, distinguishing characteristic of the entity. For example, an employee's social security number might be the employee's key attribute.


    A multivalued attribute can have more than one value. For example, an employee entity can have multiple skill values.

    A derived attribute is based on another attribute. For example, an employee's monthly salary is based on the employee's annual salary.

  • Connecting lines, solid lines that connect attributes and entities to show the relationships in the diagram.
  • Cardinality specifies how many instances of an entity relate to one instance of another entity. Ordinality is also closely linked to cardinality. While cardinality specifies the occurrences of a relationship, ordinality describes the relationship as either mandatory or optional. In other words, cardinality specifies the maximum number of relationships and ordinality specifies the absolute minimum number of relationships.

Documentation

Documentation is a set of documents provided on paper, online, or on digital or analog media such as audio tape or CDs. Examples are user guides, white papers, online help and quick-reference guides. It is becoming less common to see paper (hard-copy) documentation; documentation is now distributed via websites, software products and other online applications.

 

Report

A document that presents information in an organized format for a specific audience and purpose. Although summaries of reports may be delivered orally, complete reports are almost always in the form of written documents.

 

 

 
