All Lectures of Chemical Engineering Lab 1

LECTURE 1

Introduction to subject

Chemical engineers use chemical formulas and discoveries made in laboratories
and apply them on a much larger scale for commercial use. They are often tasked to
accomplish this with safety, cost-efficiency and profit in mind. While entry-level
positions require only a bachelor's degree in chemical engineering, advanced
training options, including certificate programs, are available and can increase
hiring and income potential. Entry-level chemical engineering positions require a
bachelor's degree. Some schools have additional specializations and certificate
programs that last 2-4 years. A strong mathematical background in calculus,
trigonometry, algebra, and geometry, along with a thorough understanding of
science topics such as physics, chemistry and biology, is necessary to enter a
chemical engineering program.
A master's degree is ideal for advanced or managerial vocations in chemical
engineering. Chemical engineers interested in performing independent or scholarly
research usually have doctorate degrees. At all education levels, chemical
engineers have to strike a balance between independent research, laboratory
assignments and classroom lectures. Some schools combine classroom instruction
with practical work assignments, allowing chemical engineers to gain beneficial
work experience before heading into the field. Chemical engineers rarely need to be
licensed, but licensure can lead to enhanced job prospects. To obtain licensure, candidates must complete
four years of work experience, pass a state examination and hold a degree from an accredited
engineering program.
Individuals can take the first examination, the Fundamentals of Engineering, after graduation from an
approved postsecondary degree program. Once work experience has been acquired as an engineer
intern or engineer in training, a person can sit for the Principles and Practice of Engineering
examination to obtain state licensure. Continuing education is necessary in many states to maintain
and renew this license.

Chemical Engineer Career Information


Job Duties
An employer assigns chemical engineers to specific work projects. The nature of the project
determines the work duties performed by the chemical engineer. After learning and understanding
the purpose of a project, a chemical engineer begins to perform preliminary theoretical tests. Once a
solid plan is created for manufacturing, a chemical engineer begins the production of test samples.
These test samples are carefully assigned to subjects. The results of these tests are monitored and
adjustments are made to formulas as needed. When the product has been fully tested, it is sent
away for approval and the chemical engineer moves on to another project.

Work Environment
Chemical engineers are employed in many industries other than chemical manufacturing. Some are
employed in biotechnology, business services, healthcare, food manufacturing, electronics and
energy manufacturing. Due to the waste materials and other dangers that come along with chemical
manufacturing, chemical engineers exercise extreme caution to ensure the safety of customers and
other employees.

Advancement Options
After working through the training and internship level, a chemical engineer works independently or
in small groups with other chemical engineers. As more experience is acquired, a chemical engineer
is likely to be promoted to a position of supervision within the company. More assignments are given
to experienced engineers, giving them more responsibility and better pay. Chemical engineers who
dislike managerial roles might find themselves moving to a sales or teaching role in chemical
engineering.

In two words, chemical engineering is applied chemistry. It is the branch of
engineering concerned with the design, construction and operation of machines and
plants that perform chemical reactions to solve practical problems or make useful
products. It applies the physical sciences (chemistry and physics) and/or life sciences
(biology, microbiology and biochemistry) together with mathematics and economics to
convert raw materials or chemicals into more useful and valuable forms. Chemical
engineers might work in mineral-based industries, petrochemical plants, synthetic fiber
units, petroleum refining plants, chemical industries or refineries. They might also work
in pharmaceutical companies, paint manufacturing, the fertilizer industry, textiles sector,
plastics or explosives.

Research organizations, laboratories, defence establishments, atomic power plants and
forensic investigation departments employ experts in chemical engineering. Biochemical
engineering is one common branch of chemical engineering. Other areas that might
attract engineering students are nanotechnology and environmental engineering.
Among conventional chemical engineering streams, some common areas are ceramics;
fertilizers and pesticides; chemical processes; plastics and polymers; electro-chemical
processes and molecular chemistry-based fields.

The main topics covered in the course curricula of chemical engineering include
thermodynamics, material science and engineering, biochemical engineering, safety
engineering, environmental engineering, process instrumentation, petrol refinery
engineering, high polymer engineering and fluid mechanics. These are covered in the
syllabus of most major universities. Electives might include water treatment technology,
computational fluid dynamics, food technology, surface coating, ceramic technology,
rubber technology, bio-nanotechnology or industrial pollution control.

A syllabus for chemical engineering usually involves all common engineering subjects in
the first year. From the second year onwards, specialized courses could include
reaction engineering, chemical thermodynamics, surface science, chemical kinetics,
fluid mechanics and catalysis, according to All About Education. Until their final year,
students generally read about chemical processes and properties and cover topics such
as chemical reactors and chemical thermodynamics. They might also begin
experimenting with technology and devices. By the end of their studies, students will
likely be exposed to pharmaceuticals, petroleum, polymers and chemicals, as well as
computer and information technology as it relates to chemical engineering processes.
Chemical engineering graduates might not need to pursue postgraduate education for
employment. Higher studies are an option, but chemical engineering graduates will
need to compete with a large pool of graduates for courses such as Master of Science
(M.Sc.) in Chemistry, M.Sc. in Biology, M.Sc. in Pharmaceutical Chemistry or Master of
Pharmacy (M.Pharm.). Chemistry, biology and pharmacy graduates can all apply for the
same field of work and study as chemical engineering graduates.
Some colleges that offer programs such as a Bachelor in Chemical Engineering or
Bachelor of Technology (B.Tech.) in Chemical Engineering are given below.

In changing times, the keepers of engineering curricula must look to the most
responsive academic elements to address new needs. We believe that the future of
chemical engineering lies not only in biotechnology, but also in many other arenas,
including electronic and photonic materials and devices. As the time needed to
create a new lab experiment is mere months, whereas creation of a new text takes
years, it follows that in times of change, our laboratories should be leading, rather
than following, curricular changes. We report here the installation of six
experiments which, taken together, substantially enlarge the range of experiments
in our undergraduate "unit operations" and "transport phenomena" laboratories.
The chemical engineering undergraduate laboratory has traditionally existed to
satisfy either or both of the following objectives: (1) illustrate individual unit
operations (e.g., extraction [1], mixing [2], chromatography [3], adsorption [4,5],
electrochemical deposition [6], fluidization [7])

LECTURE 2
THERMODYNAMICS
Thermodynamics is the branch of physics that deals with heat and
temperature, and their relation to energy, work, radiation, and properties of matter.
The behavior of these quantities is governed by the four laws of thermodynamics
which convey a quantitative description using measurable macroscopic physical
quantities, but may be explained in terms of microscopic constituents by statistical
mechanics. Thermodynamics applies to a wide variety of topics in science and
engineering, especially physical chemistry, chemical engineering and mechanical
engineering, but also in fields as complex as meteorology. Historically,
thermodynamics developed out of a desire to increase the efficiency of early steam
engines, particularly through the work of French physicist Nicolas Léonard Sadi
Carnot (1824) who believed that engine efficiency was the key that could help
France win the Napoleonic Wars.[1] Scots-Irish physicist Lord Kelvin was the first
to formulate a concise definition of thermodynamics in 1854,[2] which stated,
"Thermo-dynamics is the subject of the relation of heat to forces acting between
contiguous parts of bodies, and the relation of heat to electrical agency."

The initial application of thermodynamics to mechanical heat engines was
quickly extended to the study of chemical compounds and chemical reactions.
Chemical thermodynamics studies the nature of the role of entropy in the process
of chemical reactions and has provided the bulk of expansion and knowledge of the
field.[3][4][5][6][7][8][9][10][11] Other formulations of thermodynamics emerged.
Statistical thermodynamics, or statistical mechanics, concerns itself with statistical
predictions of the collective motion of particles from their microscopic behavior.
In 1909, Constantin Carathéodory presented a purely mathematical approach in an
axiomatic formulation, a description often referred to as geometrical
thermodynamics. A description of any thermodynamic system employs the four
laws of thermodynamics that form an axiomatic basis. The first law specifies that
energy can be exchanged between physical systems as heat and work.[12] The
second law defines the existence of a quantity called entropy, which describes the
direction, thermodynamically, in which a system can evolve, quantifies the state of
order of a system, and can be used to quantify the useful work that can be
extracted from the system.[13]

In thermodynamics, interactions between large
ensembles of objects are studied and categorized. Central to this are the concepts
of the thermodynamic system and its surroundings. A system is composed of
particles, whose average motions define its properties, and those properties are in
turn related to one another through equations of state. Properties can be combined
to express internal energy and thermodynamic potentials, which are useful for
determining conditions for equilibrium and spontaneous processes. With these
tools, thermodynamics can be used to describe how systems respond to changes in
their environment. This can be applied to a wide variety of topics in science and
engineering, such as engines, phase transitions, chemical reactions, transport
phenomena, and even black holes. The results of thermodynamics are essential for
other fields of physics and for chemistry, chemical engineering, corrosion
engineering, aerospace engineering, mechanical engineering, cell biology,
biomedical engineering, materials science, and economics, to name a few.[14][15]
This lecture is focused mainly on classical thermodynamics, which primarily studies
systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often
treated as an extension of the classical treatment, but statistical mechanics has
brought many advances to that field.

The First Law of Thermodynamics

The first law of thermodynamics, also known as the Law of Conservation of
Energy, states that energy can neither be created nor destroyed; energy can only be
transferred or changed from one form to another. For example, turning on a light
would seem to produce energy; however, it is electrical energy that is converted. A
way of expressing the first law of thermodynamics is that any change in the
internal energy (ΔE) of a system is given by the sum of the heat (q) that flows
across its boundaries and the work (w) done on the system by the surroundings:

ΔE = q + w
The First Law of Thermodynamics states that energy can be
converted from one form to another with the interaction of heat,
work and internal energy, but it cannot be created nor
destroyed, under any circumstances. Mathematically, this is
represented as

ΔU = q + w    (1)

with

 ΔU is the total change in internal energy of a system,
 q is the heat exchanged between a system and its surroundings, and
 w is the work done by or on the system.

This law says that there are two kinds of processes, heat and work, that can lead to
a change in the internal energy of a system. Since both heat and work can be
measured and quantified, this is the same as saying that any change in the energy
of a system must result in a corresponding change in the energy of the
surroundings outside the system. In other words, energy cannot be created or
destroyed. If heat flows into a system or the surroundings do work on it, the
internal energy increases and the sign of q and w are positive. Conversely, heat
flow out of the system or work done by the system (on the surroundings) will be at
the expense of the internal energy, and q and w will therefore be negative.
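As a worked illustration of this sign convention, the short Python sketch below evaluates ΔU = q + w for both cases; the numerical values are hypothetical and chosen only to make the signs visible.

# First law: delta_U = q + w
# Sign convention: q > 0 for heat flowing into the system,
# w > 0 for work done on the system by the surroundings.
def internal_energy_change(q, w):
    """Change in internal energy (J) from heat q (J) and work w (J)."""
    return q + w

# Hypothetical case 1: the system absorbs 500 J of heat while the
# surroundings do 200 J of work on it -> internal energy rises.
print(internal_energy_change(q=500.0, w=200.0))    # 700.0

# Hypothetical case 2: the system releases 500 J of heat and does
# 200 J of work on the surroundings -> both terms are negative.
print(internal_energy_change(q=-500.0, w=-200.0))  # -700.0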

The Second Law of Thermodynamics

The second law of thermodynamics says that the entropy of any isolated
system always increases. Isolated systems spontaneously evolve towards thermal
equilibrium—the state of maximum entropy of the system. More simply put: the
entropy of the universe (the ultimate isolated system) only increases and never
decreases. A simple way to think of the second law of thermodynamics is that a
room, if not cleaned and tidied, will invariably become more messy and disorderly
with time – regardless of how careful one is to keep it clean. When the room is
cleaned, its entropy decreases, but the effort to clean it has resulted in an increase
in entropy outside the room that exceeds the entropy lost.

The Third Law of Thermodynamics

The third law of thermodynamics states that the entropy of a system
approaches a constant value as the temperature approaches absolute zero. The
entropy of a system at absolute zero is typically zero, and in all cases is determined
only by the number of different ground states it has. Specifically, the entropy of a
pure crystalline substance (perfect order) at absolute zero temperature is zero. This
statement holds true if the perfect crystal has only one state with minimum energy.
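The dependence on the number of ground states can be made concrete with Boltzmann's relation S = k·ln W, where W is the number of accessible ground states; the Python sketch below is a minimal illustration, with W values chosen purely for illustration.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def residual_entropy(W):
    """Entropy at absolute zero from Boltzmann's relation S = k_B * ln(W),
    where W is the number of accessible ground states."""
    return K_B * math.log(W)

print(residual_entropy(1))  # 0.0: a perfect crystal with a single ground state
print(residual_entropy(2))  # > 0: residual entropy of a twofold-degenerate ground state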

LECTURE 3
Nanotechnology
Nanotechnology ("nanotech") is manipulation of matter on an atomic,
molecular, and supramolecular scale. The earliest, widespread description of
nanotechnology referred to the particular technological goal of precisely
manipulating atoms and molecules for fabrication of macroscale products, also
now referred to as molecular nanotechnology. A more generalized description of
nanotechnology was subsequently established by the National Nanotechnology
Initiative, which defines nanotechnology as the manipulation of matter with at least
one dimension sized from 1 to 100 nanometers. This definition reflects the fact that
quantum mechanical effects are important at this quantum-realm scale, and so the
definition shifted from a particular technological goal to a research category
inclusive of all types of research and technologies that deal with the special
properties of matter which occur below the given size threshold. It is therefore
common to see the plural form "nanotechnologies" as well as "nanoscale
technologies" to refer to the broad range of research and applications whose
common trait is size. Nanotechnology as defined by size is naturally very broad,
including fields of science as diverse as surface science, organic chemistry,
molecular biology, semiconductor physics, energy storage, microfabrication,
molecular engineering, etc. The associated research and applications are equally
diverse, ranging from extensions of conventional device physics to completely new
approaches based upon molecular self-assembly, from developing new materials
with dimensions on the nanoscale to direct control of matter on the atomic scale.
Scientists currently debate the future implications of nanotechnology.
Nanotechnology may be able to create many new materials and devices with a vast
range of applications, such as in nanomedicine, nanoelectronics, biomaterials,
energy production, and consumer products. On the other hand, nanotechnology
raises many of the same issues as any new technology, including concerns about
the toxicity and environmental impact of nanomaterials, and their potential effects
on global economics, as well as speculation about various doomsday scenarios.
These concerns have led to a debate among advocacy groups and governments on
whether special regulation of nanotechnology is warranted. Nanotechnology is the
engineering of functional systems at the molecular scale. This covers both current
work and concepts that are more advanced. In its original sense, nanotechnology
refers to the projected ability to construct items from the bottom up, using
techniques and tools being developed today to make complete, high performance
products.

One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. By comparison,
typical carbon-carbon bond lengths, or the spacing between these atoms in a
molecule, are in the range 0.12–0.15 nm, and a DNA double-helix has a diameter
around 2 nm. On the other hand, the smallest cellular life-forms, the bacteria of the
genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology
is taken as the scale range 1 to 100 nm following the definition used by the
National Nanotechnology Initiative in the US. The lower limit is set by the size of
atoms (hydrogen has the smallest atoms, approximately a quarter of a
nanometer in kinetic diameter) since nanotechnology must build its devices from atoms and
molecules. The upper limit is more or less arbitrary but is around the size below
which phenomena not observed in larger structures start to become apparent and
can be made use of in the nano device. These new phenomena make
nanotechnology distinct from devices which are merely miniaturised versions of an
equivalent macroscopic device; such devices are on a larger scale and come under
the description of microtechnology.

To put that scale in another context, the comparative size of a nanometer to a
meter is the same as that of a marble to the size of the earth. Or another way of
putting it: a nanometer is the amount an average man's beard grows in the time it
takes him to raise the razor to his face.
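That marble-to-Earth comparison can be checked with a quick order-of-magnitude calculation; in the Python sketch below, the marble and Earth diameters are rough, assumed values.

# Compare the two ratios; the exact diameters do not matter, only the order of magnitude.
nanometer = 1e-9            # m
marble_diameter = 0.015     # m, an assumed ~1.5 cm marble
earth_diameter = 1.2742e7   # m, mean diameter of the Earth

print(nanometer / 1.0)                   # 1e-09  (nanometer : meter)
print(marble_diameter / earth_diameter)  # ~1.2e-09 (marble : Earth), same order of magnitude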

Molecular nanotechnology, sometimes called molecular manufacturing,
describes engineered nanosystems (nanoscale machines) operating on the
molecular scale. Molecular nanotechnology is especially associated with the
molecular assembler, a machine that can produce a desired structure or device
atom-by-atom using the principles of mechanosynthesis. Manufacturing in the
context of productive nanosystems is not related to, and should be clearly
distinguished from, the conventional technologies used to manufacture
nanomaterials such as carbon nanotubes and nanoparticles.

When the term "nanotechnology" was independently coined and popularized
by Eric Drexler (who at the time was unaware of an earlier usage by Norio
Taniguchi) it referred to a future manufacturing technology based on molecular
machine systems. The premise was that molecular scale biological analogies of
traditional machine components demonstrated molecular machines were possible:
by the countless examples found in biology, it is known that sophisticated,
stochastically optimised biological machines can be produced.

It is hoped that developments in nanotechnology will make possible their
construction by some other means, perhaps using biomimetic principles. However,
Drexler and other researchers[38] have proposed that advanced nanotechnology,
although perhaps initially implemented by biomimetic means, ultimately could be
based on mechanical engineering principles, namely, a manufacturing technology
based on the mechanical functionality of these components (such as gears,
bearings, motors, and structural members) that would enable programmable,
positional assembly to atomic specification.[39] The physics and engineering
performance of exemplar designs were analyzed in Drexler's book Nanosystems.

In general it is very difficult to assemble devices on the atomic scale, as one
has to position atoms on other atoms of comparable size and stickiness. Another
view, put forth by Carlo Montemagno,[40] is that future nanosystems will be
hybrids of silicon technology and biological molecular machines. Richard Smalley
argued that mechanosynthesis is impossible due to the difficulties in
mechanically manipulating individual molecules.

This led to an exchange of letters in the ACS publication Chemical &
Engineering News in 2003.[41] Though biology clearly demonstrates that
molecular machine systems are possible, non-biological molecular machines are
today only in their infancy. Leaders in research on non-biological molecular
machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories
and UC Berkeley.[1] They have constructed at least three distinct molecular
devices whose motion is controlled from the desktop with changing voltage: a
nanotube nanomotor, a molecular actuator,[42] and a nanoelectromechanical
relaxation oscillator.[43] See nanotube nanomotor for more examples. An
experiment indicating that positional molecular assembly is possible was
performed by Ho and Lee at Cornell University in 1999. They used a scanning
tunneling microscope to move an individual carbon monoxide molecule (CO) to an
individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the
CO to the Fe by applying a voltage.
Two main approaches are used in nanotechnology. In the "bottom-up"
approach, materials and devices are built from molecular components which
assemble themselves chemically by principles of molecular recognition.[35] In the
"top-down" approach, nano-objects are constructed from larger entities without
atomic-level control.[36]

Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and
nanoionics have evolved during the last few decades to provide a basic scientific
foundation of nanotechnology. The nanomaterials field includes subfields which
develop or study materials having unique properties arising from their nanoscale
dimensions.[46]

 Interface and colloid science has given rise to many materials which may be
useful in nanotechnology, such as carbon nanotubes and other fullerenes, and
various nanoparticles and nanorods. Nanomaterials with fast ion transport are
related also to nanoionics and nanoelectronics.
 Nanoscale materials can also be used for bulk applications; most present
commercial applications of nanotechnology are of this flavor.
 Progress has been made in using these materials for medical applications;
see Nanomedicine.
 Nanoscale materials such as nanopillars are sometimes used in solar cells, which
helps reduce the cost of traditional silicon solar cells.
 Development of applications incorporating semiconductor nanoparticles to be
used in the next generation of products, such as display technology, lighting,
solar cells and biological imaging; see quantum dots.
 Recent applications of nanomaterials include a range of biomedical applications,
such as tissue engineering, drug delivery, and biosensors.
LECTURE 4

FLOW MEASUREMENT

Flow measurement is a technique used in any process requiring the transport of a material
from one point to another (for example, bulk supply of oil from a road tanker to a garage holding
tank). It can be used for quantifying a charge for material supplied or maintaining and
controlling a specific rate of flow. In many processes, plant efficiency depends on being able to
measure and control flow accurately.
Properly designed flow measurement systems are compatible with the process or material they
are measuring. They must also be capable of producing the accuracy and repeatability that are
most appropriate for the application.
It is often said that the “ideal flowmeter should be non-intrusive, inexpensive, have absolute
accuracy, infinite repeatability, and run forever without maintenance.” Unfortunately, such a
device does not yet exist, although some manufacturers might claim that it does. Over recent
years, however, many improvements have been made to established systems, and new products
utilizing novel techniques are continually being introduced onto the market. The “ideal”
flowmeter might not in fact be so far away, and now more than ever, potential users must be
fully aware of the systems at their disposal.
Flow measurement on analyzer systems falls into three main categories:
1. Measuring the flow precisely where the accuracy of the analyzer depends on it
2. Measuring the flow where it is necessary to know the flow rate but it is not critical (e.g., fast loop flow)
3. Checking that there is flow present but measurement is not required (e.g., cooling water for heat exchangers).
It is important to decide which category the flowmeter falls into when writing the specification,
as the prices vary over a wide range, depending on the precision required.
The types of flowmeter available will be mentioned but not the construction or method of
operation, as this is covered in Chapter 1.
Variable-Orifice Meters
The variable-orifice meter is extensively used in analyzer systems because of its simplicity, and
there are two main types.
Glass Tube
This type is the most common, as the position of the float is read directly on the scale attached to
the tube, and it is available calibrated for liquids or gases. The high-precision versions are
available with an accuracy of ±1 percent full-scale deflection (FSD), whereas the low-priced
units have a typical accuracy of ±5 percent FSD.
Metal Tube
The metal tube type is used mainly on liquids for high-pressure duty or where the liquid is
flammable or hazardous. A good example is the fast loop of a hydrocarbon analyzer. The float
has a magnet embedded in it, and the position is detected by an external follower system. The
accuracy of metal tube flowmeters varies from ±10 percent FSD to ±2 percent FSD, depending
on the type and whether individual calibration is required.
Differential-Pressure Devices
On sample systems these normally consist of an orifice plate or preset needle valve to produce
the differential pressure, and are used to operate a gauge or liquid-filled manometer when
indication is required or a differential pressure switch when used as a flow alarm.
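For an orifice plate, the flow rate follows from the measured differential pressure through the standard orifice equation Q = Cd·A·√(2ΔP/ρ), here neglecting the velocity-of-approach factor. The Python sketch below is a minimal illustration; the discharge coefficient, bore diameter, and fluid properties are assumed, illustrative values.

import math

def orifice_flow(delta_p, rho, d_orifice, Cd=0.61):
    """Volumetric flow (m^3/s) from a measured differential pressure:
    Q = Cd * A * sqrt(2 * dP / rho), velocity-of-approach factor neglected.
    delta_p   : differential pressure, Pa
    rho       : fluid density, kg/m^3
    d_orifice : orifice bore diameter, m
    Cd        : discharge coefficient, ~0.6 for a sharp-edged orifice"""
    area = math.pi * d_orifice ** 2 / 4.0
    return Cd * area * math.sqrt(2.0 * delta_p / rho)

# Assumed example: water (rho ~ 1000 kg/m^3) across a 25 mm bore at 10 kPa:
print(orifice_flow(delta_p=10e3, rho=1000.0, d_orifice=0.025))  # ~1.3e-3 m^3/s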
Spinner or Vane-Type Indicators
In this type the flow is indicated either by the rotation of a spinner or by the deflection of
a vane by the fluid. It is ideal for duties such as cooling water flow, where it is essential to know
that a flow is present but the actual flow rate is of secondary importance.
Future lunar heat flow measurements are currently being studied in the framework of an
international collaboration termed the International Lunar Network, but definite plans for
mission implementation do not exist at present. However, the recently selected Discovery-class
mission InSight, which is due to launch in 2016, will place a geophysical lander carrying a heat
flow probe in the southern Elysium region of Mars in September 2016. The heat flow
probe, termed the Heat flow and Physical Properties Package, or HP3 for short, is built to access
the Martian regolith to a depth of up to 5 m by means of a hammering mechanism, emplacing a
suite of temperature sensors into the subsurface. The overall measurement approach is similar to
that taken by Bullard or Langseth, and a depth-resolved measurement of the subsurface
temperatures will be used to determine the thermal gradient. Active heating elements inside
HP3 will be used to determine the thermal conductivity in situ, and the attenuation of the annual
temperature wave will be used to independently estimate thermal diffusivity and to provide a
consistency check for the thermal conductivity value determined from active heating.
While a single heat flow measurement is hardly enough to confidently constrain the average heat
flow from a planet, the InSight measurement will provide an important baseline. In addition, the
seismic experiment on InSight will provide an estimate of crustal thickness, which can be used to
validate thickness models derived from gravity data. The thickness of the crust is a key
constraint needed to interpret local heat flow measurements in terms of the global average.
Furthermore, the heat flow pattern on the Martian surface is expected to be much simpler than
that of either the Earth or the Moon for two reasons: First, Mars currently lacks a plate
tectonics cycle (although it may have possessed one during its earliest evolution), and second,
Mars does not show any geochemically anomalous regions like the PKT of the Moon
(compare Figure 55.11). Therefore, a first global estimate of the average heat flow can be
derived from the InSight data, but further measurements in different locations are clearly
desirable.
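The quantity such a probe reconstructs is the conductive heat flux given by Fourier's law, q = k·dT/dz: the in-situ conductivity multiplied by the measured thermal gradient. The Python sketch below uses assumed, regolith-like numbers purely for illustration.

def conductive_heat_flux(k, dT_dz):
    """Fourier's law: heat flux q = k * dT/dz, in W/m^2.
    k     : thermal conductivity, W/(m K)
    dT_dz : magnitude of the vertical temperature gradient, K/m"""
    return k * dT_dz

# Assumed illustrative values: k = 0.04 W/(m K), gradient = 0.5 K/m
print(conductive_heat_flux(0.04, 0.5))  # 0.02 W/m^2, i.e. 20 mW/m^2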
Another device built to measure the energy balance at the surface of
an extraterrestrial body is the Rosetta MUPUS instrument (MUlti
PUrpose sensor for Subsurface observation), currently on its way
to comet 67P/Churyumov–Gerasimenko. Its goal is to measure the
heat flow into the comet, the largest unknown contribution to the
surface energy balance of the comet. The instrument will be delivered
to the cometary surface onboard the Rosetta Philae Lander and is
shown in Figure 55.12. MUPUS consists of a 35-cm-long rod
equipped with temperature sensors and heaters, and a hammering
mechanism is mounted on top of the rod to emplace it into the ground.
The instrument will then determine the surface temperature,
subsurface thermal gradient, as well as the thermophysical
properties of the cometary regolith, thus quantifying the heat flow into
the comet and helping to detail the solar energy input responsible for
driving activity and liberating gas and dust to the cometary coma and
tail. Other measurement approaches which have been proposed to
determine heat flow on extraterrestrial bodies include the so-called
flux plates, which can measure surface heat flow in environments with
constant surface temperatures. Flux plates are placed on
the planetary surface from which the heat flow is to be measured, and
the temperature difference across a layer of known thermal
conductivity is recorded. Such a device would be suited to determine
the heat flow on Venus, whose rocky surface precludes drilling to any
significant depth. The constant cloud cover and very dense
atmosphere of Venus result in very stable surface temperatures, such
that a flux plate measurement could be successfully executed there.
Such a measurement would provide a very important constraint on
how Earth-sized planets without plate tectonics lose their heat.
Before considering clamp-on flow measurement of gases, a brief
review about clamp-on for liquids may be in order (Section II.B.2.b).
Regarding liquids (e.g., water) in plastic pipe, it has been known for a
long time that plastic pipe’s low sound speed c2 compared to the
sound speed c3 in water makes it easy to obtain a long axial
interaction length L. (Lake, 1962; Lynnworth, 1967, p. 275 or 1979, p.
473.) The low c2 also makes it easy to utilize an off-diameter path
(Lynnworth, 1967, Figure 15; 1979, p. 434). Also, the transmission of
acoustic energy between the liquid and the pipe wall is relatively
efficient, compared to liquid in a steel pipe. This is because the
acoustic impedance Z2 of the plastic pipe is usually within a factor of
three of the acoustic impedance in the liquid, Z3. In contrast, for water
in steel pipe, Z2/Z3 ≈ 30. Despite this unfavorable impedance ratio,
clamp-on flowmeter manufacturers (see Figures 45 and 48) are able
to measure flow of liquids in steel pipes and tubes down to the order
of 1 cm diameter, and up to several meters in diameter, typically to an
accuracy of one or a few percent of reading over normal flow rates.
For plastic (low-impedance) pipe, this means one can expect a very
high SNR (signal-to-noise ratio) and a very high refracted angle θ3
for ultrasonic measurements of liquid flow in plastic pipe.
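A transit-time meter of this kind infers the axial velocity from the difference between the downstream and upstream travel times along the refracted acoustic path. From t = L/(c ± v·cosθ), the exact inversion is v = L/(2·cosθ)·(1/t_down − 1/t_up). The Python sketch below is a minimal illustration; the path length, angle, and transit times are assumed values.

import math

def transit_time_velocity(L, theta_deg, t_down, t_up):
    """Axial flow velocity (m/s) from ultrasonic transit times.
    L         : acoustic path length in the fluid, m
    theta_deg : angle between the acoustic path and the pipe axis, degrees
    t_down    : transit time with the flow, s
    t_up      : transit time against the flow, s"""
    cos_t = math.cos(math.radians(theta_deg))
    return (L / (2.0 * cos_t)) * (1.0 / t_down - 1.0 / t_up)

# Assumed example: 0.15 m path at 45 degrees, transit times near 101 microseconds:
print(transit_time_velocity(0.15, 45.0, t_down=1.0101e-4, t_up=1.0107e-4))  # ~0.62 m/s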

LECTURE 5
Mass transfer
Mass transfer is the net movement of mass from one location, usually meaning
stream, phase, fraction or component, to another. Mass transfer occurs in many
processes, such as absorption, evaporation, drying, precipitation, membrane
filtration, and distillation. Mass transfer is used by different scientific disciplines
for different processes and mechanisms. The phrase is commonly used
in engineering for physical processes that
involve diffusive and convective transport of chemical species within physical
systems.
Some common examples of mass transfer processes are
the evaporation of water from a pond to the atmosphere, the purification of blood
in the kidneys and liver, and the distillation of alcohol. In industrial processes,
mass transfer operations include separation of chemical components in distillation
columns, absorbers such as scrubbers or strippers, adsorbers such as activated
carbon beds, and liquid-liquid extraction. Mass transfer is often coupled to
additional transport processes, for instance in industrial cooling towers. These
towers couple heat transfer to mass transfer by allowing hot water to flow in
contact with air. The water is cooled by expelling some of its content in the form
of water vapour.
Mass transfer plays a vital role in many reaction systems. As the
distance between the reactants and the site of reaction becomes
greater, the rate of mass transfer is more likely to influence or control
the conversion rate. Taking again the example of oxygen in aerobic
cultures, if mass transfer of oxygen from the bubbles is slow, the rate
of cell metabolism will become dependent on the rate of oxygen
supply from the gas phase. Because oxygen is a critical component of
aerobic fermentations and is so sparingly soluble in aqueous
solutions, much of our interest in mass transfer lies with the transfer of
oxygen across gas–liquid interfaces. However, liquid–solid mass
transfer can also be important in systems containing clumps, pellets,
flocs, or films of cells or enzymes. In these cases, nutrients in the
liquid phase must be transported into the solid before they can be
utilised in reaction. Unless mass transfer is rapid, the supply of
nutrients will limit the rate of biological conversion.

High capacity with high liquid rates or high viscosity.
Trayed columns employ the energy of the vapor to create a mass-transfer surface area by
bubbling through the liquid. Packed towers, with the aid of gravity, create a mass-transfer
surface area by the action of the liquid falling over the packing. Thus, there are no downcomers
in a packed tower, and 100% of the tower cross section is used for mass transfer.
High capacity/efficiency combinations.
Because the capacity of a packed tower is greater than a comparably sized trayed one, a smaller,
more efficient packing can be used to handle the same capacity. The range of packing sizes and
types allows the combination of efficiency and capacity to be optimized.
High capacity in foaming systems.
Trayed columns use the continuous liquid phase to create a froth that is difficult to separate.
Packed towers make the vapor phase continuous, and the liquid phase discontinuous.
Low pressure drop.
Packed towers have a low ΔP per theoretical stage or transfer unit, which is beneficial in low-
pressure and vacuum applications.
Low residence time.
Packed towers offer low liquid holdup. This gives lower residence time for materials that are
sensitive to high processing temperature. In contrast, trayed columns typically impose a holdup
volume of 20-30%.
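Returning to the oxygen-transfer discussion above, the rate at which oxygen crosses the gas-liquid interface is commonly written with a volumetric mass transfer coefficient kLa as OTR = kLa·(C* − CL). The Python sketch below is a minimal illustration; the kLa and concentration values are assumed, fermenter-like numbers.

def oxygen_transfer_rate(kLa, C_star, C_L):
    """Oxygen transfer rate per unit liquid volume, OTR = kLa * (C* - C_L).
    kLa    : volumetric mass transfer coefficient, 1/s
    C_star : oxygen saturation (solubility) concentration, mol/m^3
    C_L    : dissolved oxygen concentration in the bulk liquid, mol/m^3"""
    return kLa * (C_star - C_L)

# Assumed values: kLa = 0.05 1/s, C* = 0.25 mol/m^3, C_L = 0.08 mol/m^3
print(oxygen_transfer_rate(0.05, 0.25, 0.08))  # 8.5e-3 mol/(m^3 s)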
Mass transport in polymer or solid phases is much slower than
in aqueous electrolytes. The process is nevertheless important for
many applications like the charging and discharging of batteries.
Control and enhancement of the charge transfer by diffusion or
migration is a big problem in electrochemical kinetics. There is a
continuous decrease of agitation in the layer extending from the
bulk of the electrolyte to the electrode surface. If an electrode process
occurs on the electrode surface, in this layer of reduced influence from
stirring, a concentration gradient builds up between the surface
(concentration cs) and the electrolyte bulk (concentration c0). The
concentration gradient defines a layer first described by Nernst and by
Brunner and is called the diffusion layer d. Its typical order of magnitude
in aqueous electrolytes is 10⁻³ cm. The reason for this layer formation
in a stirred electrolyte is viscosity. While the concentration outside this
layer is always kept equal to the bulk concentration by the flow of the
electrolyte, the flow rate of the electrolyte in the layer is continuously
decreasing and therefore approaches zero at the electrode surface.
The flow rate decrease is linear for laminar flow and nonlinear for
turbulent flow. The Reynolds number describes the border between
laminar and turbulent flow. Mass Transfer Between Phases
Mass transfer between phases in a fluidized bed is illustrated in Fig.
8.9. For gas interchange between bubbles and cloud, the mass
transfer coefficient Kbc, which has a unit of s−1, is defined in the
following manner:

The mass transfer process in porous media includes the following two
aspects (Shao et al., 2006): molecular diffusion and convective mass
transfer. Molecular diffusion is caused by the random motion of the
fluid molecules or solid microscopic particles. It corresponds to
the heat conduction mechanism in the heat transfer process.
Convective mass transfer is caused by the macroscopic motion of
fluid, and corresponds to convective heat transfer. Briefly, convective
mass transfer includes both the mass transfer between the fluid and
the solid body wall and the convective mass transfer between two
immiscible fluids (including a gas-liquid phase). Single-phase fluid
convective mass transfer is divided into laminar and turbulent flow
according to different fluid states. The gas-liquid two-phase flow (i.e.,
nonsaturated flow in porous media) has more and different forms of
convective mass transfer. Obviously, the macroscopic motion of the
fluid in the gap is caused by the capillary force, pressure, gravity, and
so on. It should be pointed out that there is mutual influence and a
coupling effect among the transfer processes of momentum, energy,
and mass in porous media. In recent years, some scholars have
summarized their research work in porous media and suggested that
studies on heat and mass transfer in a porous medium should focus
on the following aspects:
1. Combine the macro and micro aspects of research, taking theoretical analysis, experimental research, and numerical simulation as a means to establish and improve the micro- and macromodels of the porous medium.
2. Develop measurement principles and methods, especially measurement technologies for heat and moisture transfer characteristics in porous media; enrich and improve the basic database of porous media; and explore methods for measurement of the permeability, porosity, capillary force, surface tension, and contact angle.
3. Strengthen basic research into heat and mass transfer in porous media against the background of engineering applications, which has become one of the main research directions of heat and mass transfer in porous media.
Mass transport is involved in many processes during the manufacture and use of wood and wood
products. Most of the mass transfer in wood occurs under the influence of a total pressure
gradient and/or a diffusive driving force. The former is the primary mode of mass transfer in the
penetration of preservatives and other chemicals during pressure treatment, the entry of pulping
liquor during chemical pulping in digesters, the movement of capillary water during drying
above the fiber saturation point, the flow of steam during hot pressing of composites, and the
flow of water vapor during superheated steam drying. Molecular diffusion is the primary mode
of transfer of water during drying of wood below the fiber saturation point, of chemicals during
preservative treatment by dip diffusion, of water during sorption of moisture as
wood equilibrates with the water in the air, and of fumigant vapor as it spreads from the point of
application to other parts of the wood.
LECTURE 6
CHEMICAL PROCESS MODELING
Chemical process modeling is a computer modeling technique used
in chemical engineering process design. It typically involves using purpose-built
software to define a system of interconnected components,[1] which are then solved
so that the steady-state or dynamic behavior of the system can be predicted. The
system components and connections are represented as a process flow diagram.[1]
Simulations can be as simple as the mixing of two substances in a tank, or as
complex as an entire alumina refinery.[2]
Chemical process modeling requires a knowledge of the properties of the
chemicals involved in the simulation,[1] as well as the physical properties and
characteristics of the components of the system, such as tanks, pumps, pipes,
pressure vessels, and so on.
Process simulation is a model-based representation
of chemical, physical, biological, and other technical processes and unit
operations in software. Basic prerequisites are a thorough knowledge of chemical
and physical properties[1] of pure components and mixtures, of reactions, and of
mathematical models which, in combination, allow the calculation of a process in
computers.
Process simulation software describes processes in flow diagrams where unit
operations are positioned and connected by product or educt streams. The software
has to solve the mass and energy balance to find a stable operating point. The goal
of a process simulation is to find optimal conditions for an examined process. This
is essentially an optimization problem which has to be solved in an iterative
process.
Process simulation always uses models which introduce approximations and
assumptions but allow the description of a property over a wide range of
temperatures and pressures which might not be covered by real data. Models also
allow interpolation and extrapolation - within certain limits - and enable the search
for conditions outside the range of known properties.
The development of models[2] for a better representation of real processes is the
core of the further development of the simulation software. Model development is
done on the chemical engineering side but also in control engineering and for the
improvement of mathematical simulation techniques. Process simulation is
therefore one of the few fields where scientists from chemistry, physics, computer
science, mathematics, and several engineering fields work together.
[Figure: VLE of the mixture of chloroform and methanol, with an NRTL fit and extrapolation to different pressures]
A lot of effort is devoted to developing new and improved models for the calculation
of properties. This includes for example the description of

 thermophysical properties like vapor pressures, viscosities, caloric data, etc. of
pure components and mixtures
 properties of different apparatuses like reactors, distillation columns, pumps,
etc.
 chemical reactions and kinetics
 environmental and safety-related data
Two main different types of models can be distinguished:

1. Rather simple equations and correlations where parameters are fitted to
experimental data.
2. Predictive methods where properties are estimated.
The equations and correlations are normally preferred because they describe the
property (almost) exactly. To obtain reliable parameters it is necessary to have
experimental data which are usually obtained from factual data banks [3][4] or, if no
data are publicly available, from measurements.
Using predictive methods is much cheaper than experimental work and also than
obtaining data from data banks. Despite this big advantage, predicted properties are normally
only used in early steps of the process development to find first approximate
solutions and to exclude wrong pathways because these estimation methods
normally introduce higher errors than correlations obtained from real data.
Process simulation also encouraged the further development of mathematical
models in the fields of numerics and the solving of complex problems.[5][6]
The history of process simulation is strongly related to the development of
computer science, computer hardware, and programming languages.
Early working simple implementations of partial aspects of chemical processes
were introduced in the 1970s when suitable hardware and software (here mainly
the programming languages FORTRAN and C) became available. The modelling
of chemical properties began much earlier; notably, the cubic equations of state and
the Antoine equation were precursory developments of the 19th century. Initially,
process simulation was used to simulate steady-state processes. Steady-state
models perform a mass and energy balance of a stationary process (a process in an
equilibrium state), which does not depend on time.
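The Antoine equation mentioned in the historical note above is a simple example of the first model type, a correlation whose parameters are fitted to experimental vapor-pressure data. The Python sketch below uses constants for water that are illustrative only (pressure in mmHg, temperature in °C, valid roughly between 1 and 100 °C).

def antoine_vapor_pressure(T_celsius, A, B, C):
    """Saturation vapor pressure from the Antoine correlation:
    log10(P) = A - B / (C + T). Units follow the fitted constants."""
    return 10.0 ** (A - B / (C + T_celsius))

# Illustrative constants for water (P in mmHg, T in deg C):
A, B, C = 8.07131, 1730.63, 233.426
print(antoine_vapor_pressure(100.0, A, B, C))  # ~760 mmHg at the normal boiling point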
Dynamic simulation is an extension of steady-state process simulation whereby
time-dependence is built into the models via derivative terms, i.e., the accumulation of
mass and energy. The advent of dynamic simulation means that the time-dependent
description, prediction and control of real processes in real time has become
possible. This includes the description of starting up and shutting down a plant,
changes of conditions during a reaction, holdups, thermal changes and more.
Dynamic simulations require increased calculation time and are mathematically
more complex than a steady state simulation. It can be seen as a multiply repeated
steady state simulation (based on a fixed time step) with constantly changing
parameters.
Dynamic simulation can be used in both an online and offline fashion. The online
case is model predictive control, where the real-time simulation results are
used to predict the changes that would occur for a control input change, and the
control parameters are optimised based on the results. Offline process simulation
can be used in the design, troubleshooting and optimisation of process plant as well
as the conduction of case studies to assess the impacts of process modifications.
Dynamic simulation is also used for operator training.
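The contrast between a steady-state balance and a dynamic one can be shown on the simplest possible unit, a stirred mixing tank whose accumulation term is integrated with explicit Euler steps. The Python sketch below is minimal and all numbers are assumed.

# Dynamic mass balance on a constant-volume stirred tank:
#   V * dC/dt = F * (C_in - C)    (accumulation = in - out)
# The steady-state solution is C = C_in; the dynamic model shows the approach to it.

F = 0.5      # feed flow, m^3/min (assumed)
V = 10.0     # tank volume, m^3 (assumed)
C_in = 2.0   # feed concentration, mol/m^3 (assumed)
C = 0.0      # initial tank concentration, mol/m^3
dt = 0.1     # time step, min

for _ in range(1200):                # simulate 120 minutes
    C += dt * (F / V) * (C_in - C)   # explicit Euler update of the accumulation term

print(C)  # approaches the steady-state value C_in = 2.0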
Computer simulation is the process of mathematical modelling, performed on
a computer, which is designed to predict the behaviour of and/or the outcome of a
real-world or physical system. Since they allow one to check the reliability of chosen
mathematical models, computer simulations have become a useful tool for the
mathematical modeling of many natural systems in physics (computational
physics), astrophysics, climatology, chemistry, biology and manufacturing, as well
as human systems in economics, psychology, social science, health
care and engineering. Simulation of a system is represented as the running of the
system's model. It can be used to explore and gain new insights into
new technology and to estimate the performance of systems too complex
for analytical solutions.[1]
Computer simulations are realized by running computer programs that can be
either small, running almost instantly on small devices, or large-scale programs
that run for hours or days on network-based groups of computers. The scale of
events being simulated by computer simulations has far exceeded anything
possible (or perhaps even imaginable) using traditional paper-and-pencil
mathematical modeling. Over 10 years ago, a desert-battle simulation of one force
invading another involved the modeling of 66,239 tanks, trucks and other vehicles
on simulated terrain around Kuwait, using multiple supercomputers in
the DoD High Performance Computer Modernization Program. [2] Other examples
include a 1-billion-atom model of material deformation; [3] a 2.64-million-atom
model of the complex protein-producing organelle of all living organisms,
the ribosome, in 2005;[4] a complete simulation of the life cycle of Mycoplasma
genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in
May 2005 to create the first computer simulation of the entire human brain, right
down to the molecular level.[5]
The external data requirements of simulations and models vary widely. For some,
the input might be just a few numbers (for example, simulation of a waveform of
AC electricity on a wire), while others might require terabytes of information (such
as weather and climate models).
Input sources also vary widely:

 Sensors and other physical devices connected to the model;
 Control surfaces used to direct the progress of the simulation in some way;
 Current or historical data entered by hand;
 Values extracted as a by-product from other processes;
 Values output for the purpose by other simulations, models, or processes.
Lastly, the time at which data is available varies:

 "invariant" data is often built into the model code, either because the value is
truly invariant (e.g., the value of π) or because the designers consider the value
to be invariant for all cases of interest;
 data can be entered into the simulation when it starts up, for example by
reading one or more files, or by reading data from a preprocessor;
 data can be provided during the simulation run, for example by a sensor
network.
Because of this variety, and because diverse simulation systems have many
common elements, there are a large number of specialized simulation languages.
The best-known may be Simula (sometimes called Simula-67, after the year 1967
when it was proposed). There are now many others.
Computer models can be classified according to several independent pairs of
attributes, including:

 Stochastic or deterministic (and as a special case of deterministic, chaotic)
 Steady-state or dynamic
 Continuous or discrete (and as an important special case of discrete, discrete
event or DE models)
 Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-
body mechanical systems (described primarily by DAEs), or dynamics
simulation of field problems, e.g. CFD or FEM simulations (described by
PDEs).
 Local or distributed.
Another way of categorizing models is to look at the underlying data structures.
For time-stepped simulations, there are two main classes:

 Simulations which store their data in regular grids and require only next-
neighbor access are called stencil codes. Many CFD applications belong to this
category.
 If the underlying graph is not a regular grid, the model may belong to
the meshfree method class.
Equations define the relationships between elements of the modeled system and
attempt to find a state in which the system is in equilibrium. Such models are often
used in simulating physical systems, as a simpler modeling case before dynamic
simulation is attempted.

 Dynamic simulations model changes in a system in response to (usually
changing) input signals.
 Stochastic models use random number generators to model chance or random
events;
 A discrete event simulation (DES) manages events in time. Most computer,
logic-test and fault-tree simulations are of this type. In this type of simulation,
the simulator maintains a queue of events sorted by the simulated time they
should occur. The simulator reads the queue and triggers new events as each
event is processed. It is not important to execute the simulation in real time. It
is often more important to be able to access the data produced by the simulation
and to discover logic defects in the design or the sequence of events (see the
event-queue sketch after this list).
 A continuous dynamic simulation performs numerical solution of differential-
algebraic equations or differential equations (either partial or ordinary).
Periodically, the simulation program solves all the equations and uses the
numbers to change the state and output of the simulation. Applications include
flight simulators, construction and management simulation games, chemical
process modeling, and simulations of electrical circuits. Originally, these kinds
of simulations were actually implemented on analog computers, where the
differential equations could be represented directly by various electrical
components such as op-amps. By the late 1980s, however, most "analog"
simulations were run on conventional digital computers that emulate the
behavior of an analog computer.
 A special type of discrete simulation that does not rely on a model with an
underlying equation, but can nonetheless be represented formally, is agent-
based simulation. In agent-based simulation, the individual entities (such as
molecules, cells, trees or consumers) in the model are represented directly
(rather than by their density or concentration) and possess an internal state and
set of behaviors or rules that determine how the agent's state is updated from
one time-step to the next.
 Distributed models run on a network of interconnected computers, possibly
through the Internet. Simulations dispersed across multiple host computers like
this are often referred to as "distributed simulations". There are several
standards for distributed simulation, including Aggregate Level Simulation
Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level
Architecture (simulation) (HLA) and the Test and Training Enabling
Architecture (TENA).
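As noted in the discrete event simulation item above, the sketch below is a minimal, self-contained event-queue loop in Python; the event names and times are invented purely for illustration.

import heapq

# Minimal discrete event simulation: keep a queue of (time, event) pairs
# sorted by simulated time, pop the earliest, process it, and possibly
# schedule new events as a result.
events = []
heapq.heappush(events, (0.0, "start"))
heapq.heappush(events, (4.0, "valve_open"))
heapq.heappush(events, (2.5, "pump_on"))

while events:
    clock, name = heapq.heappop(events)   # next event in simulated time
    print(f"t = {clock:4.1f}: {name}")
    if name == "pump_on":                 # processing an event can trigger new ones
        heapq.heappush(events, (clock + 1.0, "flow_established"))

# Events fire in time order: start, pump_on, flow_established, valve_open.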

LECTURE 7

HEAT EXCHANGERS

PROCESS DESIGN OF SHELL AND TUBE EXCHANGER FOR SINGLE PHASE HEAT TRANSFER

Classification of heat exchangers

Transfer of heat from one fluid to another is an important operation for most of the chemical industries. The most common application of heat transfer is in the design of heat transfer equipment for exchanging heat from one fluid to another fluid. Such devices for efficient transfer of heat are generally called heat exchangers. Heat exchangers are normally classified depending on the transfer process occurring in them; a general classification of heat exchangers is shown in Figure 1.1. Among all types of exchangers, shell and tube exchangers are the most commonly used heat exchange equipment. The common types of shell and tube exchangers are:

 Fixed tube-sheet exchanger (non-removable tube bundle): The simplest and cheapest type of shell and tube exchanger is the fixed tube sheet design. In this type of exchanger the tube sheet is welded to the shell and no relative movement between the shell and tube bundle is possible (Figure 1.2).

 Removable tube bundle: The tube bundle may be removed for ease of cleaning and replacement. Removable tube bundle exchangers can be further categorized into floating-head and U-tube exchangers.

 Floating-head exchanger: It consists of a stationary tube sheet which is clamped with the shell flange. At the opposite end of the bundle, the tubes may expand into a freely riding floating head or floating tube sheet. A floating-head cover is bolted to the tube sheet, and the entire bundle can be removed for cleaning and inspection of the interior. This type of exchanger is shown in Figure 1.3.

 U-tube exchanger: This type of exchanger consists of tubes which are bent in the form of a "U" and rolled back into the tube sheet, as shown in Figure 1.4. This means that some tubes at the centre of the tube bundle are omitted, depending on the tube arrangement. The tubes can expand freely towards the "U" bend end.

The different operational and constructional advantages and limitations of shell and tube exchangers, depending on application, are summarized in Table 1.1. TEMA (USA) and IS: 4503-1967 (India) standards provide the guidelines for the mechanical design of unfired shell and tube heat exchangers. As shown in Table 1.1, TEMA 3-digit codes specify the types of front-end, shell, and rear-end of shell and tube exchangers.

Heat Transfer Mechanism

There are two types of heat transfer mechanisms employed by heat exchangers—single-phase or
two-phase heat transfer.

In single-phase heat exchangers, the fluids do not undergo any phase change throughout the heat
transfer process, meaning that both the warmer and cooler fluids remain in the same state of
matter at which they entered the heat exchanger. For example, in water-to-water heat transfer
applications, the warmer water loses heat, which is transferred to the cooler water, and
neither stream changes to a gas or a solid.

On the other hand, in two-phase heat exchangers, fluids do experience a phase change during the
heat transfer process. The phase change can occur in either or both of the fluids involved
resulting in a change from a liquid to a gas or a gas to a liquid. Typically, devices which employ
a two-phase heat transfer mechanism require more complex design considerations than ones
which employ a single-phase heat transfer mechanism. Some of the types of two-phase heat
exchangers available include boilers, condensers, and evaporators.
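
As a minimal numerical illustration of the single-phase case, the sketch below balances the sensible heat duty Q = m·cp·ΔT between two water streams; all stream conditions are assumed values, not data from this text.

# Minimal sketch: sensible-heat balance for a single-phase, water-to-water
# exchanger. Q = m_dot * cp * (t_out - t_in). Stream conditions are assumed.

CP_WATER = 4.18  # specific heat of liquid water, kJ/(kg*K), approximate

def heat_duty(m_dot, t_in, t_out, cp=CP_WATER):
    """Sensible heat duty in kW (m_dot in kg/s, temperatures in deg C)."""
    return m_dot * cp * (t_out - t_in)

# Hot stream cools from 90 C to 60 C at 2.0 kg/s.
q_hot = heat_duty(2.0, 90.0, 60.0)      # negative: heat removed from hot stream
# Neglecting losses, the cold stream absorbs the same duty.
m_cold = 3.0                             # kg/s (assumed)
dt_cold = -q_hot / (m_cold * CP_WATER)   # temperature rise of the cold stream
print(f"Duty = {-q_hot:.1f} kW; cold stream warms by {dt_cold:.1f} K")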

Types of Heat Exchangers


Based on the design characteristics indicated above, there are several different variants of heat
exchangers available. Some of the more common variants employed throughout industry
include:

 Shell and tube heat exchangers
 Double pipe heat exchangers
 Plate heat exchangers
 Condensers, evaporators, and boilers
Shell and Tube Heat Exchangers
The most common type of heat exchangers, shell and tube heat exchangers are constructed of a
single tube or series of parallel tubes (i.e., tube bundle) enclosed within a sealed, cylindrical
pressure vessel (i.e., shell). The design of these devices is such that one fluid flows through the
smaller tube(s), and the other fluid flows around and between them within
the sealed shell. Other design characteristics available for this type of heat exchanger
include finned tubes, single- or two-phase heat transfer, countercurrent flow, cocurrent flow, or
crossflow arrangements, and single, two, or multiple pass configurations.

Some of the types of shell and tube heat exchangers available include helical coil heat
exchangers and double pipe heat exchangers, and some of the applications include preheating, oil
cooling, and steam generation.

[Image: a close-up view of a heat exchanger tube bundle.]

Double Pipe Heat Exchangers

A form of shell and tube heat exchanger, double pipe heat exchangers employ the simplest heat
exchanger design and configuration which consists of two or more concentric, cylindrical pipes
or tubes (one larger tube and one or more smaller tubes). As per the design of all shell and tube
heat exchangers, one fluid flows through the smaller tube(s), and the other fluid flows around the
smaller tube(s) within the larger tube.

The design requirements of double pipe heat exchangers include characteristics from the
recuperative and indirect contact types mentioned previously as the fluids remain separated and
flow through their own channels throughout the heat transfer process. However, there is some
flexibility in the design of double pipe heat exchangers, as they can be designed with cocurrent
or countercurrent flow arrangements and to be used modularly in series, parallel, or series-
parallel configurations within a system. For example, Figure 4, below, depicts the transfer of
heat within an isolated double pipe heat exchanger with a cocurrent flow configuration.

Figure 4 – Heat Transfer in a Double Pipe Heat Exchanger
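
To make the cocurrent/countercurrent distinction concrete, the sketch below computes the log-mean temperature difference (LMTD) for both arrangements; the terminal temperatures are illustrative assumptions.

import math

# Sketch: log-mean temperature difference (LMTD) for cocurrent vs.
# countercurrent arrangements. Terminal temperatures are assumed values.

def lmtd(dt1, dt2):
    """LMTD = (dT1 - dT2) / ln(dT1 / dT2); falls back to dT1 when equal."""
    if math.isclose(dt1, dt2):
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Hot stream: 100 -> 60 C; cold stream: 20 -> 50 C (assumed).
t_hot_in, t_hot_out = 100.0, 60.0
t_cold_in, t_cold_out = 20.0, 50.0

# Cocurrent: both streams enter at the same end (dT shrinks from 80 to 10 K).
co = lmtd(t_hot_in - t_cold_in, t_hot_out - t_cold_out)
# Countercurrent: streams enter at opposite ends (dT runs from 50 to 40 K).
counter = lmtd(t_hot_in - t_cold_out, t_hot_out - t_cold_in)
print(f"LMTD cocurrent = {co:.1f} K, countercurrent = {counter:.1f} K")

For the same terminal temperatures the countercurrent arrangement gives the larger mean driving force (about 44.8 K versus 33.7 K here), which is why it is generally preferred where practical.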


Plate Heat Exchangers

Also referred to as plate type heat exchangers, plate heat exchangers are constructed of several
thin, corrugated plates bundled together. Each pair of plates creates a channel through which one
fluid can flow, and the pairs are stacked and attached—via bolting, brazing, or welding—such
that a second passage is created between pairs through which the other fluid can flow.

The standard plate design is also available with some variations, such as in plate fin or pillow
plate heat exchangers. Plate fin exchangers employ fins or spacers between plates and allow for
multiple flow configurations and more than two fluid streams to pass through the device. Pillow
plate exchangers apply pressure to the plates to increase the heat transfer efficiency across the
surface of the plate. Some of the other types available include plate and frame, plate and shell,
and spiral plate heat exchangers.

[Image: a close-up view of a plate type heat exchanger.]

Condensers, Evaporators, and Boilers


Boilers, condensers, and evaporators are heat exchangers which employ a two-phase heat
transfer mechanism. As mentioned previously, in two-phase heat exchangers one or more fluids
undergo a phase change during the heat transfer process, either changing from a liquid to a gas or
a gas to a liquid.

Condensers are heat exchanging devices which take heated gas or vapor and cool it to the point
of condensation, changing the gas or vapor into a liquid. On the other hand,
in evaporators and boilers, the heat transfer process changes the fluids from liquid form to gas or
vapor form.

Other Heat Exchanger Variants

Heat exchangers are employed in a variety of applications across a wide range of industries.
Consequently, there are several variants of heat exchangers available, each suitable for the
requirements and specifications of a particular application. Beyond the variants mentioned
above, other types available include air cooled heat exchangers, fan cooled heat exchangers, and
adiabatic wheel heat exchangers.

Heat Exchanger Selection Considerations


While there are a wide variety of heat exchangers available, the suitability of each type (and its
design) in transferring heat between fluids is dependent on the specifications and requirements of
the application. Those factors largely determine the optimal design of the desired heat exchanger
and influence the corresponding rating and sizing calculations.

Some of the factors that industry professionals should keep in mind when designing and
choosing a heat exchanger include:

 The type of fluids, the fluid stream, and their properties
 The desired thermal outputs
 Size limitations
 Costs
Fluid Type, Stream, and Properties

The specific type of fluids—e.g., air, water, oil, etc.—involved and their physical, chemical, and
thermal properties—e.g., phase, temperature, acidity or alkalinity, pressure and flow rate, etc.—
help determine the flow configuration and construction best suited for that particular heat
transfer application.

For example, if corrosive, high temperature, or high pressure fluids are involved, the heat
exchanger design must be able to withstand the high stress conditions throughout the heating or
cooling process. One method of fulfilling these requirements is by choosing construction
materials which hold the desired properties: graphite heat exchangers exhibit high thermal
conductivity and corrosion resistance, ceramic heat exchangers can handle temperatures higher
than many commonly used metals’ melting points, and plastic heat exchangers offer a low-cost
alternative which maintains a moderate degree of corrosion resistance and thermal conductivity.
Another method is by choosing a design suited for the fluid properties: plate heat exchangers are
capable of handling low to medium pressure fluids but at higher flow rates than other types of
heat exchangers, and two-phase heat exchangers are necessary when handling fluids which
require a phase change throughout the heat transfer process. Other fluid and fluid stream
properties that industry professionals may keep in mind when choosing a heat exchanger include
fluid viscosity, fouling characteristics, particulate matter content, and presence of water-soluble
compounds.

Thermal Outputs

The thermal output of a heat exchanger refers to the amount of heat transferred between fluids
and the corresponding temperature change at the end of the heat transfer process. The
transference of heat within the heat exchanger leads to a change of temperature in both fluids,
lowering the temperature of one fluid as heat is removed and raising the temperature of the other
fluid as heat is added. The desired thermal output and rate of heat transfer help determine the
optimal type and design of heat exchanger, as some designs offer greater heat
transfer rates and can handle higher temperatures than others, albeit at a higher cost.

Size Limitations

After choosing the optimal type and design of a heat exchanger, a common mistake is purchasing
one that is too big for the given physical space. Oftentimes, it is more prudent to purchase a heat
exchanging device in a size which leaves room for further expansion or addition, rather than
choosing one which fully encompasses the space. For applications with limited space, such as in
airplanes or automobiles, compact heat exchangers offer high heat transfer efficiencies in
smaller, more lightweight solutions. Characterized by high heat transfer surface area to volume
ratios, several variants of these heat exchanging devices are available, including compact plate
heat exchangers. Typically, these devices feature ratios of ≥700 m²/m³ for gas-to-gas applications
and ≥400 m²/m³ for liquid-to-gas applications.
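
As a rough illustration of that area-density criterion, the sketch below evaluates the surface-area-to-volume ratio of a plain circular channel, which works out to 4/d; the diameters are assumed values.

# Rough sketch: the area density (heat transfer surface area per unit volume)
# of a plain circular channel of diameter d and length L is
#   beta = (pi * d * L) / (pi * d**2 / 4 * L) = 4 / d
# The diameters below are assumptions, purely for illustration.

def area_density(d_m):
    """Area density of a circular channel, m^2/m^3, for diameter d in metres."""
    return 4.0 / d_m

for d_mm in (25.4, 10.0, 5.0):
    beta = area_density(d_mm / 1000.0)
    label = "compact (>= 700)" if beta >= 700 else "not compact"
    print(f"d = {d_mm:4.1f} mm -> beta = {beta:6.0f} m^2/m^3, {label}")

Shrinking the flow passage is what drives the ratio up, which is why compact exchangers rely on small channels and fins rather than large plain tubes.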

Costs

The cost of a heat exchanger includes not only the initial price of the equipment, but the
installation, operational, and maintenance costs over the device’s lifespan as well. While it is
necessary to choose a heat exchanger which effectively fulfills the requirements of the
applications, it is also important to keep in mind the overall costs of the chosen heat exchanger to
better determine whether the device is worth the investment. For example, an initially expensive,
but more durable heat exchanger may result in lower maintenance costs and, consequently, lower
overall spending over the course of a few years, while a cheaper heat exchanger may cost less
up front but require several repairs and replacements within the same period of time.
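
A minimal sketch of that comparison, with every figure assumed purely for illustration:

# Sketch: total cost of ownership of two hypothetical exchangers over a
# 10-year horizon. All figures below are assumptions for illustration.

def total_cost(purchase, install, annual_upkeep, years):
    return purchase + install + annual_upkeep * years

durable = total_cost(purchase=40_000, install=5_000, annual_upkeep=1_000, years=10)
budget = total_cost(purchase=25_000, install=5_000, annual_upkeep=3_500, years=10)
print(f"Durable unit: ${durable:,.0f}; budget unit: ${budget:,.0f}")
# -> Durable: $55,000 vs. budget: $65,000; the pricier unit is cheaper to own.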

Design Optimization

Designing the optimal heat exchanger for a given application (with particular specifications and
requirements as indicated above) involves determining the temperature change of the fluids, the
heat transfer coefficient, and the construction of the heat exchanger and relating them to the rate
of heat transfer. The two main problems which arise in pursuing this objective are calculating the
device’s rating and sizing.

The rating refers to the calculation of the thermal effectiveness (i.e., efficiency) of a heat
exchanger of a given design and size, including the rate of heat transfer, the amount of heat
transferred between fluids and their corresponding temperature change, and the total pressure
drop across the device. The sizing refers to the calculation of the required total dimensions of the
heat exchanger (i.e., the surface area available for use in the heat transfer process), including the
length, width, height, thickness, number of components, component geometries and
arrangements, etc., for an application with given process specifications and requirements. The
design characteristics of a heat exchanger—e.g., flow configuration, material, construction
components and geometry, etc.—affect both the rating and sizing calculations. Ideally, the
optimal heat exchanger design for an application finds a balance (with factors optimized as
specified by the designer) between the rating and sizing which satisfies the process specifications
and requirements at the minimum necessary cost.
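
As a minimal sizing sketch, assuming the basic rate equation Q = U·A·ΔT_lm and illustrative values for the duty, overall coefficient, and LMTD:

# Sketch of the sizing problem: given a required duty Q, an assumed overall
# heat transfer coefficient U, and the log-mean temperature difference,
# solve Q = U * A * dT_lm for the required area A. All values are illustrative.

def required_area(q_kw, u, dt_lm):
    """Heat transfer area A = Q / (U * dT_lm); Q in kW, U in W/(m^2*K)."""
    return (q_kw * 1000.0) / (u * dt_lm)

q = 250.0     # required duty, kW (assumed)
u = 850.0     # overall coefficient, water-to-water service, W/(m^2*K) (assumed)
dt_lm = 44.8  # log-mean temperature difference, K (assumed)

print(f"Required area = {required_area(q, u, dt_lm):.1f} m^2")

Rating runs the same relation in the other direction: with A fixed by an existing design, Q = U·A·ΔT_lm estimates how much duty the unit can deliver.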

Applications of Heat Exchangers


Heat exchangers are devices used throughout industry for both heating and cooling processes.
Several variants of heat exchangers are available and find application in a wide range of
industries, including:

 ASME heat exchangers
 Automotive heat exchangers (typically as car radiators)
 Brewery heat exchangers
 Chemical heat exchangers
 Cryogenic heat exchangers
 Marine heat exchangers
 Power generation heat exchangers
 Refrigeration heat exchangers

Table 1, below, indicates some of the common industries and applications of the types of heat
exchangers previously mentioned.

Table 1 – Industries and Applications of Heat Exchangers by Type

Type of Heat Exchanger: Common Industries and Applications

Shell and Tube:
 Oil refining
 Preheating
 Oil cooling
 Steam generation
 Boiler blowdown heat recovery
 Vapor recovery systems
 Industrial paint systems

Double Pipe:
 Industrial cooling processes
 Small heat transfer area requirements

Plate:
 Cryogenics
 Food processing
 Chemical processing
 Furnaces
 Closed loop to open loop water cooling

Condensers:
 Distillation and refinement processes
 Power plants
 Refrigeration
 HVAC
 Chemical processing

Evaporators/Boilers:
 Distillation and refinement processes
 Steam trains
 Refrigeration
 HVAC

Air Cooled/Fan Cooled:
 Limited access to cooling water
 Chemical plants and refineries
 Engines
 Power plants

Adiabatic Wheel:
 Chemical and petrochemical processing
 Petroleum refineries
 Food processing and pasteurization
 Power generation
 Cryogenics
 HVAC
 Aerospace
