Wednesday 31 January 2018

DIFFERENCE BETWEEN GAUGE PRESSURE AND ABSOLUTE PRESSURE EXPLAINED !!!

Introduction:

Pressure is the force per unit area applied in a direction perpendicular to the surface of an object. It is symbolized by ‘P’. Put briefly, it is the amount of force acting on a unit area. The simple formula for pressure is:
P = F / A
where P = pressure, F = force, A = area
The SI unit of pressure is the pascal (Pa). Common non-SI units include psi and bar.
There are two kinds of reference against which pressure is measured: gauge pressure and absolute pressure.

Absolute Pressure:

The actual pressure at a given position is called the absolute pressure, and it is measured relative to absolute vacuum. One concept to keep in mind is that to measure any quantity we require a baseline with respect to which we measure it.

To understand this concept, let us take an example: suppose we need to measure the distance of Chennai. Distance can be measured in metres, but can we measure the distance of Chennai from this input alone? Obviously not, because we need a reference from which to measure the distance. Now suppose we need to measure the distance of Chennai from Delhi. Now we are able to express this distance in metres or kilometres.

Similarly, pressure cannot be measured without a reference. When we take vacuum (the no-pressure condition) as the reference, the measured pressure is called absolute pressure.

Gauge Pressure:

When we take atmospheric pressure as the reference to measure the pressure of a system, the measured pressure is known as gauge pressure. Most pressure devices work in atmospheric conditions and therefore measure gauge pressure. We can convert gauge pressure to absolute pressure by adding the atmospheric pressure to the gauge pressure.

It should be noted that atmospheric pressure may vary depending on many factors, such as locality; altitude and temperature are the essential ones. The standard atmospheric pressure (1 atm) is about 14.7 psi (101.325 kPa).

P (absolute) = P (gauge) + P (atmospheric)

Most gauges read zero in the atmosphere even though atmospheric pressure is present; in effect, they treat atmospheric pressure as their zero point. Pressure below atmospheric pressure is called vacuum pressure and is measured by vacuum gauges that indicate the difference between the atmospheric pressure and the absolute pressure.

P (vacuum) = P (Atmospheric) – P (Absolute)
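
As a quick sanity check, the two relations above can be put into a few lines of Python (a minimal sketch; the standard 14.7 psi atmosphere is assumed constant):

    # Minimal sketch of the pressure relations above (units: psi).
    P_ATM = 14.7  # standard atmospheric pressure, assumed constant here

    def absolute_pressure(gauge):
        """P(absolute) = P(gauge) + P(atmospheric)."""
        return gauge + P_ATM

    def vacuum_pressure(absolute):
        """P(vacuum) = P(atmospheric) - P(absolute)."""
        return P_ATM - absolute

    # A tire gauge reading 32 psi corresponds to an absolute pressure of:
    print(absolute_pressure(32.0))  # 46.7 psi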

Machines like air compressors, well pumps, and tire gauges will all use gauge pressure. 

Summary:

1. Absolute pressure is measured in relation to the vacuum, while gauge pressure is the difference between the absolute pressure and the atmospheric pressure.
2. Absolute pressure uses absolute zero as its zero point, while gauge pressure uses atmospheric pressure as its zero point.
3. Gauge pressure is commonly used, while absolute pressure is used for scientific experimentations and calculations.
4. To indicate gauge pressure, a ‘g’ is placed after the unit. Absolute pressure, on the other hand, uses the term ‘abs’.
5. Because atmospheric pressure varies, a gauge pressure reading depends on local conditions, while an absolute pressure reading is always definite.
6. Absolute pressure is sometimes referred to as ‘total systems pressure’, while gauge pressure is sometimes called ‘overpressure’.

Tuesday 30 January 2018

IMPORTANT TERMS OF MEASUREMENT PROCESS EXPLAINED !!

1. Sensitivity

It should be noted that sensitivity is a term associated with the measuring equipment, whereas accuracy and precision are associated with the measuring process. Sensitivity means the ability of a measuring device to detect small differences in the quantity being measured. For instance, if a very small change in voltage applied to two voltmeters results in a perceptible change in the indication of one instrument but not in the other, the former is said to be more sensitive. Numerically, sensitivity is the ratio of scale spacing to scale division value; for example, if on a dial indicator the scale spacing is 1.0 mm and the scale division value is 0.01 mm, then sensitivity = 1.0/0.01 = 100. It is also called the amplification factor or gearing ratio.
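
The dial-indicator example works out directly as the ratio just described (a minimal sketch in Python):

    # Sensitivity as the ratio of scale spacing to scale division value.
    scale_spacing = 1.0          # mm, spacing between graduation lines on the dial
    scale_division_value = 0.01  # mm, change in measurand per division

    sensitivity = scale_spacing / scale_division_value
    print(sensitivity)  # 100.0 -> the amplification factor (gearing ratio)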

2. Readability

Readability refers to the ease with which the readings of a measuring instrument can be read. It is the susceptibility of a measuring device to have its indication converted into a more meaningful number. Fine and widely spaced graduation lines ordinarily improve readability. If the graduation lines are very finely spaced, the scale will be more readable under a microscope; to the naked eye, however, the readability will be poor.
To make micrometers more readable, they are provided with a vernier scale. Readability can also be improved by using magnifying devices.

3. Repeatability

It is the ability of the measuring instrument to repeat the same results when measurements are carried out
  • By the same observer
  • With the same instrument
  • Under the same conditions
  • Without any change in location
  • Without any change in the method of measurement
  • Within a short interval of time.
It may be expressed quantitatively in terms of dispersion of the results.

4. Reproducibility

Reproducibility is the consistency of the pattern of variation in measurement, i.e. the closeness of agreement between the results of measurements of the same quantity when the individual measurements are carried out
  1. By different observers
  2. By different methods
  3. Using different instruments
  4. Under different conditions, locations and times.
It may also be expressed quantitatively in terms of dispersion of the results.
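
Both repeatability and reproducibility are quantified by the dispersion of repeated results; the sample standard deviation is a common measure. A minimal sketch (the readings are made-up illustrative values):

    import statistics

    # Hypothetical repeated readings of the same dimension (mm).
    same_conditions = [25.01, 25.02, 25.01, 25.00, 25.02]    # one observer, one instrument
    varied_conditions = [25.01, 25.05, 24.97, 25.08, 24.95]  # different observers/instruments

    # Smaller dispersion under identical conditions -> better repeatability;
    # dispersion under changed conditions quantifies reproducibility.
    print(statistics.stdev(same_conditions))    # ~0.008 mm
    print(statistics.stdev(varied_conditions))  # ~0.054 mm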

5. Calibration

  • Calibration of a measuring instrument is necessary to ensure the accuracy of the measurement process. It is the process of framing the scale of the instrument by applying standard (known) signals. Calibration is a pre-measurement process generally carried out by manufacturers.
  • It is carried out by making adjustments such that the readout device produces zero output for zero measured input; similarly, it should display an output equal to the known measured input near the full-scale input value.
  • If accuracy is to be maintained, the instrument must be checked and recalibrated if necessary.
  • As far as possible, calibration should be performed under environmental conditions similar to those of the actual measurement.
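
The zero-and-span adjustment described above amounts to fitting a straight line between two known inputs; a minimal sketch (the readout values are hypothetical):

    # Two-point (zero and span) calibration: map a raw readout to the true input.
    zero_raw, zero_true = 0.02, 0.0   # readout at zero input (hypothetical offset)
    span_raw, span_true = 9.85, 10.0  # readout at a known near-full-scale input

    gain = (span_true - zero_true) / (span_raw - zero_raw)

    def calibrated(raw):
        """Correct a raw readout using the zero/span adjustment."""
        return zero_true + (raw - zero_raw) * gain

    print(calibrated(4.95))  # ~5.02, corrected mid-scale reading
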
6. Magnification

Magnification means increasing the magnitude of the output signal of a measuring instrument many times to make it more readable. The degree of magnification should bear some relation to the accuracy of measurement desired and should not be larger than necessary. Generally, the greater the magnification, the smaller the range of measurement.

Monday 29 January 2018

DIFFERENCE BETWEEN HOT SPARK PLUG AND COLD SPARK PLUG EXPLAINED !!

Introduction:

In order to ignite the air-fuel mixture we need heat. In diesel engines (compression-ignition engines) this heat is achieved by the compression of the gases. In spark-ignition engines, however, compression alone is not enough to ignite the mixture, so an external source is needed.
A spark plug is a device for delivering electric current from an ignition system to the combustion chamber of a spark-ignition engine to ignite the compressed fuel/air mixture by an electric spark, while containing combustion pressure within the engine.

There are two types of spark plugs:


  • Hot Spark Plug
  • Cold Spark Plug

“Cold” spark plugs normally have a short heat-flow path. This results in a very quick rate of heat transfer. Additionally, the short insulator nose found on cold spark plugs has a small surface area, which limits the amount of heat it can absorb.

On the other hand, “hot” spark plugs feature a longer insulator nose as well as a longer heat-transfer path. This results in a much slower rate of heat transfer to the surrounding cylinder head.
The heat range of the spark plug must be carefully selected to achieve optimal thermal performance. If the heat range is not correct, serious trouble can be expected. Typically, the appropriate firing-end temperature is about 900-1,450 °F (roughly 480-790 °C). Below 900 °F, carbon fouling is possible; above 1,450 °F, overheating becomes an issue.
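
This window can be captured in a couple of lines (a sketch using the limits quoted above):

    # Firing-end temperature window for a spark plug (degrees Fahrenheit).
    FOULING_LIMIT = 900    # below this, carbon fouling is possible
    OVERHEAT_LIMIT = 1450  # above this, overheating becomes an issue

    def firing_end_status(temp_f):
        if temp_f < FOULING_LIMIT:
            return "carbon fouling risk"
        if temp_f > OVERHEAT_LIMIT:
            return "overheating risk"
        return "within the optimal thermal range"

    print(firing_end_status(1200))  # within the optimal thermal range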

Saturday 27 January 2018

MACHINABILITY & MACHINABILITY INDEX EXPLAINED !!!

MACHINABILITY

Machinability is a term indicating how a work material responds to the cutting process. In the most general case, good machinability means that the material can be cut with a good surface finish, long tool life, low force and power requirements, and low cost.

MACHINABILITY INDEX

It is a numerical value that designates the degree of difficulty or ease with which a particular material can be machined.

The machinability index KM is defined by

KM = V60/V60R
where
  • V60 is the cutting speed for the target material that ensures a tool life of 60 min,
  • V60R is the same for the reference material. Reference materials are selected for each group of work materials (ferrous and non-ferrous) from among the most popular and widely used grades.
If KM is greater than 1, the machinability of the target material is better than that of the reference material, and vice versa. Note that this system can be misleading because the index differs between machining processes.
Example: Machinability rating
The reference material for steels, AISI 1112 steel, has an index of 1; it achieves a 60-min tool life at a cutting speed of 0.5 m/s.
For a tool life of 60 min, AISI 1045 steel should be machined at 0.36 m/s.
Hence, the machinability index for this steel is
KM = 0.36/0.5 = 0.72.
This index is smaller than 1; therefore, AISI 1045 steel has worse machinability than AISI 1112.
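
The same calculation in a few lines of Python (a minimal sketch of the KM ratio):

    # Machinability index KM = V60 / V60R (cutting speeds giving 60-min tool life).
    V60R = 0.5  # m/s, reference material AISI 1112 (index 1)
    V60 = 0.36  # m/s, target material AISI 1045

    KM = V60 / V60R
    print(KM)  # 0.72 -> less than 1, so AISI 1045 machines worse than AISI 1112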

WAYS OF IMPROVING MACHINABILITY INDEX:

The machinability of work materials can be improved, to a greater or lesser extent and without sacrificing productivity, in the following ways:
• Favourable change in composition, microstructure and mechanical properties by mixing a suitable type and amount of additive(s) into the work material and by appropriate heat treatment.

• Proper selection and use of cutting tool material and geometry depending upon the work material and the significant machinability criteria under consideration.

• Proper selection and appropriate method of application of cutting fluid, depending upon the tool and work materials, the desired levels of productivity (i.e., cutting velocity VC and feed so) and the primary objectives of the machining work undertaken.

• Proper selection and application of special techniques like dynamic machining, hot machining, cryogenic machining etc, if feasible, economically viable and eco-friendly.

Tuesday 23 January 2018

MODES OF HEAT TRANSFER: CONDUCTION, CONVECTION AND RADIATION EXPLAINED !!

Heat is energy that is transferred from one body to another as the result of a difference in temperature. Heat is always transferred from the higher temperature to the lower temperature, independent of the mode. The energy transferred is measured in joules (kcal or Btu). The rate of energy transfer, more commonly called heat transfer, is measured in joules/second, i.e. watts (kcal/hr or Btu/hr).

Heat is transferred by three primary modes:

  • Conduction (Energy transfer in a solid)
  • Convection (Energy transfer in a fluid)
  • Radiation (Does not need a material to travel through)

CONDUCTION :

Conduction is the transfer of heat between substances that are in direct contact with each other. The better the conductor, the more rapidly heat is transferred. If one body is at a higher temperature than the other, the motion of the molecules in the hotter body will vibrate the molecules at the point of contact in the cooler body and consequently raise its temperature. The amount of heat transferred by conduction depends upon the temperature difference, the properties of the material involved, the thickness of the material, the surface contact area, and the duration of the transfer.

Metals are good conductors of heat, while gaseous substances, having low densities or widely spaced molecules, are poor conductors. Poor conductors of heat are usually called insulators. The measure of the ability of a substance to insulate is its thermal resistance, commonly referred to as the R-value (RSI in metric). The R-value is generally the inverse of the thermal conductivity, the ability to conduct heat.

Typical units of measure for conductive heat transfer are:

  • Per unit area (for a given thickness): watt per square metre (W/m²)
  • Overall: watt (W) or kilowatt (kW)
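
The quantities listed earlier (temperature difference, material conductivity, thickness, contact area) are tied together by Fourier's law of conduction, q = k * A * dT / L. A minimal worked sketch (the conductivity value is a typical figure for an insulating material, assumed here):

    # Fourier's law of conduction: q = k * A * dT / L.
    k = 0.04   # W/(m.K), thermal conductivity of a typical insulation (assumed)
    A = 10.0   # m^2, surface contact area
    dT = 20.0  # K, temperature difference across the material
    L = 0.1    # m, thickness of the material

    q = k * A * dT / L
    print(q)  # 80.0 W overall, i.e. q/A = 8 W/m^2 per unit area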

CONVECTION :


When a fluid, such as air or a liquid, is heated and then travels away from the source, it carries the thermal energy along. This type of heat transfer is called convection. The fluid above a hot surface expands, becomes less dense, and rises. There are two types of convection: natural and forced. In natural convection, the fluid in contact with or adjacent to a high-temperature body is heated by conduction. As it is heated, it expands, becomes less dense and consequently rises. This begins a fluid-motion process in which a circulating current of fluid moves past the heated body, continuously transferring heat away from it. In forced convection, the movement of the fluid is forced by a fan, pump or other external means. A centralized hot-air heating system is a good example of forced convection.

Units of measure for rate of convective heat transfer are:
Metric (SI): watt (W) or kilowatt (kW)
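
The rate of convective transfer is commonly modelled by Newton's law of cooling, q = h * A * (Ts - Tf), where the convection coefficient h is much larger for forced than for natural convection. A minimal sketch (the h values are typical order-of-magnitude figures for air, assumed here):

    # Newton's law of cooling: q = h * A * (Ts - Tf).
    def convective_rate(h, area, t_surface, t_fluid):
        """Heat transfer rate (W) for convection coefficient h in W/(m^2.K)."""
        return h * area * (t_surface - t_fluid)

    A, Ts, Tf = 2.0, 60.0, 20.0  # m^2, surface temp (C), fluid temp (C)
    print(convective_rate(5.0, A, Ts, Tf))   # 400 W, natural convection (h ~ 5)
    print(convective_rate(50.0, A, Ts, Tf))  # 4000 W, forced convection (h ~ 50)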

RADIATION:


Radiation is a method of heat transfer that does not rely upon any contact between the heat source and the heated object, as is the case with conduction and convection. Heat can be transmitted through empty space by thermal radiation, often called infrared radiation. This is a type of electromagnetic radiation. No mass is exchanged and no medium is required in the process of radiation. Examples of radiation are the heat from the sun, or the heat released from the filament of a light bulb.

Typical units of measure for the rate of radiant heat transfer:
Metric (SI): watt per square metre (W/m²)
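
Radiant emission is governed by the Stefan-Boltzmann law, q = emissivity * sigma * A * T^4, with T the absolute temperature. A minimal sketch for the light-bulb filament mentioned above (the emissivity and dimensions are assumed illustrative values):

    # Stefan-Boltzmann law: radiated power q = emissivity * sigma * A * T^4.
    SIGMA = 5.67e-8  # W/(m^2.K^4), Stefan-Boltzmann constant

    def radiated_power(emissivity, area, temp_k):
        return emissivity * SIGMA * area * temp_k ** 4

    # Hypothetical tungsten filament: emissivity ~0.4, area ~1e-4 m^2, 2800 K.
    print(radiated_power(0.4, 1e-4, 2800.0))  # ~139 W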

If you find this article helpful kindly share it with your friends and if you want to add something to it feel free to write in comment box.

Thank You !

Thursday 18 January 2018

DIFFERENCE BETWEEN WELDING AND BRAZING EXPLAINED !!

Welding and brazing are metal-joining processes. The type of joining process to be applied for joining two parts depends on many factors.

WELDING :

>Welding is a process in which both the participating metals are melted and re-solidified to fuse into one metal. Proper melting of the mating parts is a basic criterion for a sound weld.

BRAZING :

>In brazing, the participating metals are not melted; instead, a third metal of lower melting point is filled in between the two. The solidification of this third metal produces the joint.

>The filler metal is drawn into the gap between the closely fitted surfaces of the joint by capillary action.

>The design of the joint should incorporate a minimum gap into which the braze filler metal will be drawn.

Comparison between welding and brazing:

1. Welded joints are the strongest joints and are used to bear load; the strength of the welded portion of the joint is usually greater than the strength of the base metal. Brazed joints are weaker than welded joints and can bear load only up to some extent.
2. In welding, the workpieces must be heated to their melting point. In brazing, the workpieces are heated, but to below their melting point.
3. Welding involves high cost and requires a high skill level. The cost involved and the skill required in brazing are lower.
4. In welding, the mechanical properties of the base metal may change at the joint due to heating and cooling. In brazing, the mechanical properties may also change at the joint, but the change is almost negligible.
5. The temperature required in welded joints may reach about 3800 °C. In brazed joints, the temperature may go up to about 600 °C.
6. Heat treatment is generally required to eliminate the undesirable effects of welding. No heat treatment is required after brazing.
7. No preheating of the workpiece is required before welding, as it is carried out at high temperature. Preheating is desirable in brazing to make a strong joint, since brazing is carried out at a relatively low temperature.

If you find this article helpful kindly share it with your friends and if you want to add something to it feel free to write in comment box.

Thank You !

Tuesday 16 January 2018

SEAM WELDING EXPLAINED !!

Seam welding is a resistance welding process in which overlapping sheets are joined progressively by local fusion along a joint, using two rotating circular electrodes. Fusion takes place because of the heat generated by the resistance to electric current flow through the work parts, which are held together under pressure by the electrodes.


Principle of Operation (Procedure)
a) The work-pieces to be seam welded are cleaned, overlapped suitably and placed between the two circular electrodes, which hold the work-pieces together under the electrode force.

b) Switch on the coolant supply (in some machines the electrodes are cooled by an external spray of water; in others, the electrodes are cooled by a refrigerant fluid that flows inside the working electrodes).

c) Switch on the current supply. As the first current impulse is applied, the power-driven circular electrodes are set in rotation and the work-pieces steadily move forward.

d) If the current is switched off and on quickly, a continuous fusion zone made up of overlapping nuggets is obtained. This is known as stitch welding.

e) If individual spot welds are obtained by constant and regularly timed interruption of the welding current, the process is known as roll (spot) welding.

Advantages of Seam Welding

  • It can produce gas-tight or liquid-tight joints.
  • The overlap can be less than in spot or projection welds.
  • Several parallel seams may be produced.

Disadvantages of Seam Welding 

  • The cost of the equipment is high compared with a spot welding set.
  • Welding can be done only along a straight or uniformly curved line.
  • It is difficult to weld thicknesses greater than 3 mm.

Applications of Seam Welding:

It is used for welding stainless steels, steel alloys, nickel and its alloys, magnesium alloys, etc.

Monday 15 January 2018

DIFFERENCE BETWEEN TURRET AND CAPSTAN LATHE EXPLAINED !!

A lathe is a machine tool that rotates the workpiece about an axis of rotation to perform various operations such as cutting, sanding, knurling, drilling, deformation, facing and turning, with tools that are applied to the workpiece to create an object with symmetry about that axis.
There are basically two broad classifications of semi-automatic lathes. They are:
  1. Turret Lathe
  2. Capstan Lathe 

TURRET LATHE :

>The turret lathe is a form of metalworking lathe that is used for repetitive production of duplicate parts, which by the nature of their cutting process are usually interchangeable.

>It has additional turret, which is an indexable toolholder that allows multiple cutting operations to be performed, each with a different cutting tool.

>The operator does not need to perform set-up tasks in between, such as installing or uninstalling tools.

>The turret head is directly mounted on the saddle and the saddle slides over the bed ways. (See figure.)
Fig: Turret Lathe
>The saddle is moved to provide feed to the tool.
>They are heavy and durable.
>More feed and depth of cut can be provided for machining.
>It is used for mass production of large, identical parts.
>It is equipped with power chucks.

CAPSTAN LATHE :

>The capstan lathe also has a turret containing multiple cutting tools, but in the capstan lathe the turret head is mounted on the ram and the ram is mounted on the saddle. (See figure.)
Fig: Capstan Lathe
>The saddle is locked at a particular point and the ram is moved to provide feed to the tool.
>They are lighter in construction.
>Only a limited amount of feed and depth of cut can be provided for machining.
>It is used for mass production of small, identical parts.
>It has hand-operated collet chucks.


Also see : LATHE AND OPERATIONS ON IT


If you find this article helpful kindly share it with your friends and if you want to add something to it feel free to write in comment box.

Thank You !