Occasionally, mature industries are turned upside down by innovation. Years of research on robotics and multi-agent systems are now converging to deliver just such a disruption to the material-handling industry. Although automated guided vehicles have long been used to move materials within warehouses, they have mainly been employed to carry very large, heavy objects such as reels of uncut paper or engine blocks. The convergence of low-cost wireless communications, computational control, and robotic components is making autonomous vehicles cheaper, smaller, and more capable. The Kiva warehouse automation system establishes a new paradigm for pick-pack-and-ship warehouses that significantly improves worker productivity (Wurman, 2009). The Kiva system uses movable storage shelves that can be lifted by small autonomous robots. By bringing the product to the worker, productivity is increased by a factor of two or more, while simultaneously improving accountability and flexibility. A Kiva installation for a large distribution center may require over five hundred vehicles, which makes the Kiva system arguably the first commercially available, large-scale autonomous robot system. The first permanent installation of a Kiva system was deployed in the summer of 2006.
Kiva announced the commercial availability of an automated material-handling system aimed at pick-pack-and-ship warehouses (Wurman, 2009). The central innovation of the Kiva system is the introduction of low-cost robots capable of lifting and carrying three-foot-square shelving units, referred to as inventory pods. The robots, called drive units, transport the pods from storage areas to stations where workers can take items off the shelves and place them into shipping cartons. Throughout the day, the picker remains at the station while a continuing stream of robots presents pick faces. Moving the inventory to the worker, rather than the worker to the inventory, has at least doubled worker productivity. These results have been borne out in pilot projects and at a permanent installation that went online in the summer of 2006.
Unlike other commercial multi-vehicle robotic systems, which typically comprise a handful of robots per deployment, a typical installation of a Kiva system in a large warehouse involves hundreds of robots. Operating conditions for multi-vehicle systems can be classified along several dimensions. The extreme case of an unknown environment is planetary exploration; search-and-rescue scenarios and landmine detection pose such challenging environments that robots for these tasks are still a long way from being cost-effective (Tarek, 2012). In contrast, Kiva drive units operate in a controlled, known environment, which greatly simplifies the design problem and makes the solution practical. The business case for installing a Kiva system typically projects a one-to-three-year return on investment. Moreover, although the robots are cooperative, each Kiva robot is fundamentally autonomous: no robot depends on any other robot to accomplish its task, even though the system needs them all to succeed in order to complete a customer order.
The drive units are small enough to fit underneath an inventory pod and are equipped with a mechanical lifting mechanism that lets them raise pods off the ground. Each pod consists of a stack of trays, each of which is partitioned into bins. A variety of tray and bin sizes provides a mix of storage locations suited to the product profile the warehouse stores. A typical Kiva installation is organized as a grid, with storage zones in the center and pick stations spread around the perimeter. The drive units move pods containing the correct bins from their storage locations to a station, where a pick worker removes the desired products from the bin. Note that a pod has four faces, so the drive unit may need to rotate the pod to present the right face. Once a worker has finished with a pod, the drive unit stores it in an empty storage location.
Every station is equipped with a desktop computer that controls pick lights, barcode scanners, and laser pointers used to identify the pick and put locations. Because every item is scanned in and out of the system, picking errors decline substantially, potentially eliminating the need for post-picking quality control. In general, each station is capable of serving as either a picking station or a replenishment station. In practice, pick stations are placed near outbound conveyors and replenishment stations near pallet drop-off points. The power of the Kiva solution comes from the fact that it gives every worker random access to any inventory in the warehouse. Furthermore, inventory can be retrieved in parallel: since the picker is filling several boxes at the same time, this parallel, random access ensures that she is not waiting on pods to arrive. Indeed, by maintaining a small queue of work at the station, the Kiva system supplies a fresh pod face every six seconds, which sets a baseline picking rate of 600 lines per hour; peak rates can exceed 600 lines per hour when the worker can pick more than one item off a pod.
There are noteworthy benefits to this system. First, there is greater accountability: each order is filled entirely by a single person, improving accuracy and accountability by reducing the number of touches on the product. Second, there are no downstream dependencies: no worker's productivity depends on the performance of workers earlier in a sequential process; every workstation is complete and self-contained. Third, there is no batch processing: in a Kiva warehouse, everything is done immediately, so a customer's order can be filled within minutes of being received. The system's replenishment process is also greatly streamlined, because the choice of replenishment station is unconstrained and any station can be used to put product away (Wellman, 2007). Furthermore, there is no single point of failure: unlike a conveyor, a drive unit that stops working does not shut down the entire floor. The rest of the system keeps operating, most likely with no noticeable effect on productivity. The system also deploys quickly: because there is no fixed infrastructure, a fifty-station warehouse can be brought online in a matter of days instead of months, and Kiva has deployed a smaller, two-station system in a single day. The system has spatial flexibility as well, meaning it can accommodate support columns, flow into multiple rooms, and handle other eccentricities of the building; by incorporating automated lifts, a Kiva installation can use mezzanines to fill the vertical space. If expansion is needed, a warehouse can simply add more pods, drive units, and stations. The ease with which warehouses can be deployed and extended means that managers do not need to buy automation sized for the volume forecast five years out; they need only a large enough building, and they can acquire Kiva components to handle growth as it occurs.
The Kiva warehouse automation system is strongly influenced by AI methods, though several of the techniques employed are textbook applications of well-known systems. The software design reflects the fact that the Kiva system is by its nature a multi-agent system (Hompel, 2006): every drive unit and every station is a computational device that can receive requests and act on them. At the same time, the system presents a huge and difficult resource-allocation problem, in which the resources include shelf space at the stations, drive units, storage pods, inventory, and physical floor space.
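The sources cited here do not describe Kiva's actual allocation algorithm, but the flavor of the resource-allocation problem can be illustrated with a minimal sketch: a greedy assignment of idle drive units to pending pod-retrieval jobs by grid distance. Everything in it (the class names, the Manhattan metric, the greedy heuristic) is an illustrative assumption, not Kiva's design.

```python
# Minimal sketch of one allocation step in a Kiva-style warehouse.
# Illustrative only: the real allocation logic is proprietary; the
# greedy nearest-unit heuristic and all names here are assumptions.
from dataclasses import dataclass

@dataclass
class DriveUnit:
    uid: int
    x: int          # grid position
    y: int
    busy: bool = False

@dataclass
class PodJob:
    pod_id: int
    x: int          # pod's storage cell
    y: int
    station: int    # pick station that requested the pod

def manhattan(ax, ay, bx, by):
    # Drive units travel on a grid, so Manhattan distance is a natural metric.
    return abs(ax - bx) + abs(ay - by)

def assign_jobs(units, jobs):
    """Greedily give each pending pod-retrieval job to the nearest idle unit."""
    assignments = []
    for job in jobs:
        idle = [u for u in units if not u.busy]
        if not idle:
            break  # remaining jobs wait for the next allocation cycle
        best = min(idle, key=lambda u: manhattan(u.x, u.y, job.x, job.y))
        best.busy = True
        assignments.append((best.uid, job.pod_id, job.station))
    return assignments

units = [DriveUnit(1, 0, 0), DriveUnit(2, 5, 5), DriveUnit(3, 9, 2)]
jobs = [PodJob(101, 4, 4, station=1), PodJob(102, 8, 1, station=2)]
print(assign_jobs(units, jobs))  # [(2, 101, 1), (3, 102, 2)]
```

A production system would also have to handle congestion, path reservation, pod-face rotation, and battery management, all of which this sketch ignores.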
To this end, it has been noted that the Kiva warehouse automation system has been well received by the marketplace. As this approach spreads through the industry, it is likely to raise a variety of interesting new computational and organizational problems that require solutions. The Kiva system has shown researchers and engineers that working on systems involving many robots can be very energizing, and it is hoped that others will be motivated to work on such systems as well.
Hompel, M. (2006). Warehouse Management: Automation and Organization of Warehouse and Order Picking Systems. Boston, MA: Springer.
Tarek, M. (2012). Prototyping of Robotic Systems: Applications of Design and Implementation. New York, NY: IGI Global.
Wellman, M. P. (2007). Economic principles of multi-agent systems. Artificial Intelligence, 94(1), 1–6.
Wurman, P. (2009). Coordinating Hundreds of Cooperative, Autonomous Vehicles in Warehouses. Woburn, MA: Sage.
A Speech on Corporate Social Responsibility
I stand before you to talk about Corporate Social Responsibility. My research involved examining corporate social responsibility at the Sony Corporation and then comparing it with corporate social responsibility within the construction industry, as my poster presentation shows.
I will start with a quote from Pierce and Jameson, who stated:
“If we are to create a sustainable world – one in which we are accountable to the needs of all future generation and all living creatures – we must recognize that our present forms of agriculture, architecture, engineering, and technology are deeply flawed. To create a sustainable world we must transform these practices. We must infuse the design of products, buildings, and landscapes with a rich and detailed understanding of ecology” (Pierce and Jameson, 2004).
As stated earlier, our research concentrated on identifying the steps Sony Corporation has taken towards corporate social responsibility, in a bid to recommend the same for the construction industry. Our findings are that Sony Corporation has an extensive social responsibility framework that involves compliance with social responsibility principles, production of eco-friendly products, responsible sourcing, training and development programs for employees, and highly advanced technology arising from increased creativity and innovation. In the past, Sony Corporation has engaged in community programs such as support of relief efforts for victims of tsunamis, hurricanes, earthquakes, wildfires, and other natural disasters. It has also undertaken activities to support the arts, disadvantaged youths, and protection of the environment. It also runs programs to ensure the employment of disabled persons and to support women in advancing their careers.
In regard to corporate social responsibility in the construction industry, this sector has been named among the most important contributors to economic development, and it employs the largest number of people in the labor market. However, alongside its many positive contributions, it has significant negative impacts, especially on the environment and the welfare of the community. Research has shown that the construction industry has very adverse effects on the environment, including pollution from the many gases emitted during construction, land degradation due to mining and other activities, consumption of enormous amounts of energy, and loss of forests and many natural habitats. Its employees also face many health issues and unfavorable working conditions. As a result, the construction industry should take the lead in implementing social responsibility principles, which would reduce these effects on the community and contribute to sustainable development. Some of the steps it can take include adopting energy-efficient mechanisms, recycling and using eco-materials, and providing mentorship programs for employees. These steps will lead to a healthier community, and the organizations will be able to reap the many benefits of CSR listed earlier.
The Injection of CO2 in Displacement of Reservoir Oil
To improve the extraction of oil, various techniques are used to enhance oil recovery in reservoirs. Gas injection is the most commonly employed technique: carbon dioxide or nitrogen is injected into the reservoir, where it expands and pushes additional oil toward the nearest production wellbore. The injected CO2 lowers the viscosity of the oil and thus improves its flow rate. Equally important, the injected carbon dioxide swells the crude oil and lowers the interfacial tension between the CO2 and the oil phase in near-miscible regions.
The Problems in Injection of CO2 for Displacement of Reservoir Oil
One of the main challenges in the injection of CO2 concerns the reservoir pressure required to maintain miscibility: minimum miscibility pressures range from about 1,150 psi for carbon dioxide to about 4,900 psi in higher-pressure applications (Satter 465). In essence, the core issue in miscible gas flooding is poor vertical and horizontal sweep efficiency caused by viscous fingering. Large volumes of carbon dioxide are required and expensive, and the gas solvent may be trapped in the reservoir and prove difficult or impossible to recover and recycle to reduce costs. It is also significant that large outlays are incurred in acquiring carbon dioxide, yet the resulting increment in oil recovery remains relatively marginal as a share of total oil recovered in countries like Canada and the U.S. Currently, CO2 flooding yields about 279,000 barrels of oil per day, roughly 4.9 percent of total U.S. crude oil production (Rafiqul 187). Recent carbon dioxide flooding has been so technically and financially attractive that CO2 supply, rather than price, has been the constraint on development. CO2 flooding is carried out by injecting large quantities of CO2, up to fifteen percent of the hydrocarbon pore volume, into the reservoir. Typically, it takes about ten Mcf of carbon dioxide to recover a single incremental barrel of oil.
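Taking the article's own figures at face value, a quick back-of-the-envelope calculation shows the scale of gas handling implied by a utilization of ten Mcf per incremental barrel; the sketch below simply restates the numbers quoted above.

```python
# Back-of-the-envelope CO2 demand implied by the figures quoted above.
# Sketch only: real utilization factors vary widely from project to project.
co2_per_bbl_mcf = 10         # ~10 Mcf of CO2 per incremental barrel (from the text)
daily_oil_bbl = 279_000      # CO2-EOR oil production quoted for the U.S.

daily_co2_mcf = co2_per_bbl_mcf * daily_oil_bbl
print(f"{daily_co2_mcf:,} Mcf/day")          # 2,790,000 Mcf/day
print(f"{daily_co2_mcf / 1e6:.2f} Bcf/day")  # ~2.79 Bcf/day of CO2 handled
```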
An early breakthrough of carbon dioxide leads to numerous problems, including corrosion in producing wells. The presence of carbon dioxide also raises the problem of separating it from saleable hydrocarbons, which calls for recompression of the carbon dioxide for recycling. Another major problem is the increasing requirement of CO2 per incremental barrel produced. There remains a large open question as to how to overcome the constraints and operational issues of the carbon dioxide miscible process (sweep effectiveness, unfavorable injectivity profiles, increasing ratios of carbon dioxide to oil produced, and gravity override) (Islam 300). It has proved difficult to identify and implement affordable, effective carbon dioxide thickeners that would increase the viscosity of carbon dioxide, partly because of the lack of precision needed to carefully regulate that increase. A possible alternative is a carbon dioxide-soluble thickener that is insoluble in brine and crude oil; however, this presents its own challenge, since the thickener must be prevented from partitioning into other fluid phases and adsorbing onto the reservoir rock. It is also challenging to design effective carbon dioxide foams for mobility reduction, especially for high-temperature reservoirs where chemical degradation of surfactants is a great concern.
The Displacement Differences in Injection of CO2 for Displacement of Reservoir Oil
The technique of gas injection displacement comprises miscible flooding, oil volume swelling, and viscosity reduction. In the sub-reservoir considered, the initial reservoir pressure is lower than the minimum miscibility pressure for associated gas injection but higher than the minimum miscibility pressure for carbon dioxide (CO2) injection. This implies that the displacement mechanism of associated gas injection in such a sub-reservoir is immiscible flooding, while that of CO2 injection is miscible flooding. CO2-water alternating miscible flooding is the most efficient technique to enhance recovery (Dipietro et al 48): its recovery is higher than that of water flooding, continuous CO2 miscible flooding, and associated gas-water alternating immiscible flooding. Associated gas-water alternating immiscible flooding can also enhance recovery, since the injected gas swells the oil and reduces its viscosity. As for CO2 recovery, CO2-water alternating miscible flooding applied after associated gas-water immiscible flooding performs essentially the same as CO2-water alternating miscible flooding alone; the preceding immiscible flood has no apparent incremental effect on recovery. The reason is that the mobility of the associated gas is high enough to cause viscous fingering and thereby form a dominant flow path; the subsequently injected carbon dioxide flows only along this path and cannot sweep additional residual oil.
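The fingering argument above can be made precise with the standard mobility-ratio criterion from the EOR literature (a textbook relation added here for clarity; it does not appear in the sources cited):

\[
M \;=\; \frac{\lambda_{\text{displacing}}}{\lambda_{\text{displaced}}} \;=\; \frac{k_{rg}/\mu_g}{k_{ro}/\mu_o},
\]

where \(k_r\) denotes relative permeability and \(\mu\) viscosity. When \(M > 1\), as when a low-viscosity gas displaces oil, the displacement front is unstable and viscous fingers grow into the dominant flow paths described above.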
Dipietro et al. “The Role of Naturally-Occurring CO2 Deposits in the Emergence of CO2 Enhanced Oil Recovery.” Retrieved from: http://co2conference.net/pdf/1.2-Slides_DiPietroCO2Sources2011-CO2FloodingConf.
Islam, Rafiq. Greening of Petroleum Operations: The Science of Sustainable Energy Production. New York, NY: John Wiley & Sons, 2011.
Rafiqul, Islam. Unconventional Gas Reservoirs: Evaluation, Appraisal, and Development. Boston: Elsevier, 2014.
Satter, Abdus. Practical Enhanced Reservoir Engineering: Assisted with Simulation Software. Washington, DC: PennWell Books, 2008.
The Blood Gas Analyzer
The blood gas analyzer is a device used to measure pH, blood gases, electrolytes, and some metabolic products in the blood. Blood gas analyzers can record the partial pressures of oxygen and carbon dioxide, pH, and the levels of various ions such as bicarbonate, chloride, potassium, and sodium. The device can also measure metabolites such as lactate, glucose, magnesium, and calcium (World Health Organization 1). Blood gas analyzers can also identify abnormalities in metabolite and electrolyte levels, in oxygen/carbon dioxide exchange, and in acid-base balance. The device can take the form of a bench-top or a hand-held unit. It has an LCD screen, a keypad for entering information, and a slot through which samples are inserted. Some blood gas analyzers come with additional features such as touch-pens, alarms, USB ports, memory functions, and storage compartments (World Health Organization 1).
Blood gas analyzers are fitted with electrodes that measure the partial pressures of oxygen and carbon dioxide in the blood, and the pH. Analyzers that determine blood chemistry are fitted with a pad that contains the reagent needed for a specific test. Analyzers that measure electrolytes use the ion-selective electrode method, which measures the activities of the ions in solution (World Health Organization 1). The first step in using a blood gas analyzer is placing the blood samples in test tubes and loading them on the analyzer. The operator then selects the test to be performed from the keypad or from a computer connected to the device. The analyzer channels the blood into a measuring chamber containing electrodes specific to the variables being measured (World Health Organization 1). For instance, pH-sensitive electrodes work by comparing the electric potential registered at their tip with a reference potential; the difference is a measure of the concentration of hydrogen ions (H+). Electrodes sensitive to carbon dioxide are fitted with a semipermeable membrane, made of silicone or Teflon, that covers their tip (Sood, Paul, and Puri 57). At the interface between the electrode and the membrane, water molecules combine with carbon dioxide molecules, releasing hydrogen ions in proportion to the partial pressure of carbon dioxide. The voltmeter actually measures H+, but its scale is calibrated in pCO2.
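The behavior of the pH electrode described above follows the Nernst relation, a standard electrochemical result added here for clarity:

\[
E = E^{0} - \frac{2.303\,RT}{F}\,\mathrm{pH},
\]

so at 25 °C the measured potential changes by about 59.2 mV per pH unit, which is the slope on which the analyzer's calibration rests.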
In electrodes that measure oxygen, the gas passes through a membrane made of polypropylene and reacts within a phosphate buffer. Water molecules react with oxygen molecules in the buffer, producing an electric current proportional to the number of oxygen molecules (Sood, Paul, and Puri 57). The blood gas analyzer measures this current and reports the result as the partial pressure of oxygen. After the measurements are recorded, the blood that remains in the equipment is removed as waste, and the measuring apparatus is cleaned in readiness for the next test. The units of measure used in blood gas analyzers include kilopascals (kPa) and millimeters of mercury (mmHg) (Sood, Paul and Puri 58).
The blood gas analyzer is an important instrument for monitoring the acid-base balance of patients, the efficiency of gas exchange, and the status of the respiratory system. Patients monitored using blood gas analyzers include those just out of surgery, patients on oxygen therapy, patients in intensive care, and patients with conditions such as diabetes, cardiovascular malfunction, and kidney disorders (Sood, Paul, and Puri 58).
The human body functions within a narrow, slightly alkaline optimum, with normal pH falling between 7.35 and 7.45, and maintaining optimal physiological function requires maintaining a normal pH. The two key processes that maintain pH in the body are metabolic and respiratory functions. Blood pH is considered low if it is below 7.35, meaning the blood is acidic (Uyanki, Sertoglu and Kayadibi 104). If the pH is above 7.45, the blood is considered alkalotic. Respiration influences blood pH by releasing CO2, a by-product of metabolism, into the bloodstream. The CO2 is transported to the lungs, where it is eliminated through breathing (Singh, Khatana, and Gupta 136). Excess carbon dioxide, however, dissolves in water to form carbonic acid. Blood pH changes with the amount of carbonic acid in the blood, and consequently the rate and depth of breathing also change. Carbon dioxide is considered a respiratory acid: when blood pH declines, CO2 is eliminated more rapidly, and when blood pH rises, CO2 is retained (Singh, Khatana, and Gupta 136). The renal system is the other, metabolic, route that affects blood pH. The kidneys excrete hydrogen ions and reabsorb bicarbonate. Bicarbonate is a product of metabolism and is alkaline. As blood pH declines and the blood becomes acidic, the body responds by retaining bicarbonate; conversely, if blood pH rises and the blood becomes alkaline, the body responds by eliminating bicarbonate through urine (Singh, Khatana, and Gupta 136).
The blood gas analyzers used in modern hospitals are the product of many years of gradual improvement. The basic technology is the same, but the size of the instruments has shrunk significantly. Nevertheless, this reduction in size has presented a challenge: fitting the sensors into the reduced analyzer sizes and designs (Singh, Khatana, and Gupta 137). The pioneering blood gas analyzers used the pH electrode developed at the start of the twentieth century. Scientists had discovered that when a fine glass membrane is placed between two solutions of different pH, a small difference in electrical potential develops. Measuring this potential difference was difficult at first, but with the development of electrical instrumentation, practical pH meters were made (Singh, Khatana, and Gupta 137).
The second generation of blood gas analyzers was developed some fifty years after the pioneering one. Second-generation analyzers used pCO2 sensors covered with plastic membranes. The membrane covered a mercury/mercury chloride reference electrode, and later bicarbonate was added to the electrolyte solution (Dev, Hillmer, and Ferri 7). This allowed the partial pressure of carbon dioxide outside the membrane to be related to pH through a basic mathematical relation known as the Henderson-Hasselbalch equation. The partial oxygen pressure sensor, an electrode covered by a membrane, was then developed and integrated into the analyzer to form a single integrated instrument (Dev, Hillmer, and Ferri 7).
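For the bicarbonate buffer system, the Henderson-Hasselbalch equation mentioned above takes its familiar clinical form:

\[
\mathrm{pH} = 6.1 + \log_{10}\!\frac{[\mathrm{HCO_3^-}]}{0.03 \times p\mathrm{CO_2}},
\]

where \([\mathrm{HCO_3^-}]\) is in mmol/L, \(p\mathrm{CO_2}\) is in mmHg, and 0.03 mmol/L per mmHg is the solubility coefficient of CO2 in plasma. For example, a normal bicarbonate of 24 mmol/L and a pCO2 of 40 mmHg give pH = 6.1 + log10(24/1.2) = 6.1 + log10(20) ≈ 7.40.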
Despite the improvement in the design and accuracy of blood gas analyzers over the years, scientists still face challenges in designing the device. One challenge is achieving a comprehensive understanding of the electrochemistry and physical chemistry at work in the sensors (Singh, Khatana, and Gupta 138). A second challenge is selecting the materials that provide the best sensor function. Lastly, a physical method of evaluating material properties, such as chemical impurities and surface texture, in the materials used to make gas analyzers is still missing. Solving these challenges is more important today because of the trend toward smaller devices; scientists hope that smaller gas analyzers will lead to improved analysis (Singh, Khatana, and Gupta 138).
Most gas analyzers use several sensors arranged in a complex flow cell. The time it takes for the gases and liquids to flow through the cell is long, which can lead to an unacceptably long measurement cycle. By contrast, the recording of the sensor signals needed to produce the result takes place during a very small window, for example a fraction of a second (Singh, Khatana, and Gupta 138). The device uses the remaining time in the reading cycle to identify a suitable starting point for the next reading. This means the time allocated for the sensors to respond is very short, and the obvious solution is faster sensors. Ideally, a sensor should be fast enough to reach equilibrium within the signal-acquisition window. In reality, it is difficult to build such fast sensors, and even fast sensors sometimes respond poorly during signal acquisition (Singh, Khatana, and Gupta 139).
Dealing with the above challenge means that the instrument must determine the end points of the reactions, that is, predict what the responses would have been had they been allowed to proceed to completion. The prediction should be made using a mathematical model suitable for the sensor used; in some cases, empirical predictions can be made (Singh, Khatana, and Gupta 138). The efficacy and speed of blood gas analyzers can also be improved by selecting the most suitable material for the membrane that measures partial carbon dioxide pressure. In earlier years, the material used for the partial carbon dioxide membrane was polypropylene, Teflon, or polyethylene; blood gas analyzers with membranes of these materials took tens of seconds to respond, which is why the first-generation blood gas analyzers were very slow (Singh, Khatana, and Gupta 138).
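A common way to "predict the end point" of a slow sensor is to assume a first-order (exponential) approach to equilibrium and extrapolate from a few equally spaced readings. The closed form below is a generic sketch of that idea, not the algorithm of any particular analyzer.

```python
# Generic end-point extrapolation for a sensor that approaches equilibrium
# exponentially: s(t) = s_inf + (s0 - s_inf) * exp(-t / tau).
# Sketch only; commercial analyzers use their own proprietary fitting schemes.

def extrapolate_endpoint(s0: float, s1: float, s2: float) -> float:
    """Given three readings taken at equal time intervals, return the
    predicted equilibrium value s_inf of an exponential response."""
    denom = s0 + s2 - 2.0 * s1
    if abs(denom) < 1e-12:
        return s2  # response is already flat or linear; use the latest reading
    return (s0 * s2 - s1 * s1) / denom

# Example: a pCO2-like signal relaxing toward 40 with a per-interval decay
# factor of 0.5, so the readings are 60.0, 50.0, 45.0.
print(extrapolate_endpoint(60.0, 50.0, 45.0))  # 40.0
```

The closed form works because for equally spaced samples of an exponential, successive differences shrink by a constant factor, which three points suffice to pin down.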
Technological advances have permitted the development of hand-held gas analyzers. Portable analyzers use disposable analytical cartridges and, in some instances, are fitted with optical sensors. This means that a nurse or physician without specialized technical training can use a hand-held device. Some hand-held blood gas analyzers are considered less accurate because they are not calibrated, and this increases the cost of analysis (Sood, Paul, and Puri 58).
Blood gas analyzers are vital tools that enable medical practitioners to examine the acid-base balance in patients and to evaluate how respiratory oxygen is used. The device measures partial oxygen pressure, partial carbon dioxide pressure, and pH using electrodes. The electrodes used to measure partial carbon dioxide pressure and pH are potentiometric; that is, the voltage is proportional to the CO2 and hydrogen ions present (Dev, Hillmer, and Ferri 8). Examining patients using blood gas analyzers has several advantages for medical practitioners, including helping to make a diagnosis, structuring the treatment plan, assisting in ventilator management, and enhancing acid-base management (Dev, Hillmer, and Ferri 8).
The accuracy of the readings obtained from blood gas analyzers is determined by how the blood sample was collected, prepared, and analyzed. Experts have noted that clinically significant errors can occur in any of these steps and interfere with the accuracy of the results (Dev, Hillmer, and Ferri 9). Some of the most common errors include bubbles in the specimen, blood samples not obtained from arteries, and inadequate or excessive anticoagulant in the specimen. Errors likely to occur before a sample is obtained include wrong or missing patient identification, use of an incorrect anticoagulant, failure to stabilize the patient's condition before taking samples, and poor cleaning of the apparatus before taking a sample (Dev, Hillmer, and Ferri 9).
Medical practitioners can also make errors when handling specimens, such as mixing venous and arterial blood or failing to mix the blood specimen properly with heparin. Errors can also occur during storage, including destruction of the red blood cells and improper storage conditions (Sood, Paul, and Puri 59). If medical practitioners adhere to the proper procedure when using blood gas analyzers, they are more likely to obtain accurate results. However, practitioners are cautioned to always take the relevant clinical history of a patient before administering a blood gas test (Sood, Paul, and Puri 59). For example, patients suffering from renal failure, hypotension, and diabetes are more likely to have acidic blood, while patients suffering from nausea or using diuretics are more likely to have alkaline blood. Medical practitioners using blood gas analyzers are advised to observe standard precautions when dealing with body fluids (Sood, Paul, and Puri 60).
The blood gas analyzer records oxygen and carbon dioxide levels, which helps medical practitioners determine whether the kidneys and lungs are functioning effectively. Blood gas results also guide the management approach for patients with acute diseases (Singh, Khatana, and Gupta 139). Symptoms that practitioners look for before ordering a blood gas test include nausea, confusion, breathing problems, and shortness of breath. In most cases, the test is prescribed for patients suspected of neck or head trauma accompanied by breathing problems, and for patients with metabolic diseases, lung problems, or kidney disease (Singh, Khatana, and Gupta 139).
Just like other medical tests and treatments, blood gas analyzers may have some side effects. However, blood analyzer tests are generally considered to be low risk because of the small amount of blood used. Some of the possible side effects of undergoing a blood gas analyzer test include dizziness, infections, bleeding, and blood accumulating under the skin (Singh, Khatana, and Gupta 139). In the future, gas analyzers are likely to improve patient outcomes and assist medical practitioners in providing a speedy treatment. In addition, experts have predicted that in the future, blood analyzers will be used in diagnosing a range of medical conditions. The testing methods that will be included in the gas analyzer in the future are also likely to enhance the manner in which the tests are conducted and the outcomes (Singh, Khatana, and Gupta 140).
Advances in technology will also ensure that little training is needed for practitioners to use blood gas analyzers. Current analyzers are already less cumbersome, needing less maintenance and troubleshooting, and changing electrodes has become easier than in earlier versions (Singh, Khatana, and Gupta 140). Modern blood gas analyzers also have built-in quality assurance systems, unlike earlier versions that required a practitioner to perform manual quality-control checks every eight hours. For instance, modern blood gas analyzers can detect clots and remove them from the system automatically. In addition, current analyzers are highly automated, especially when it comes to sampling (Singh, Khatana, and Gupta 140).
Modern hand-held analyzers also have several benefits, including ease of use and faster delivery of results, which makes them convenient; consequently, many medical practitioners prefer hand-held devices. Hand-held analyzers can transmit results automatically and securely, which lowers the chance of transcription errors. Cost is another reason hospitals are opting for hand-held devices: tabletop blood gas analyzers can cost between $30,000 and $100,000, while hand-held devices can cost about $7,000 (Singh, Khatana, and Gupta 141).
Experts have predicted that in the future, blood gas analyzers are likely to be integrated with other monitoring procedures, for example pulse oximetry. Integration has advantages: a gas analyzer examines a specimen at one moment in time, while pulse oximetry provides continuous monitoring. Some manufacturers have already developed wireless blood gas analyzer systems (Singh, Khatana, and Gupta 141). A wireless connection enables faster transfer of information, meaning results are available whenever medical practitioners need them. Another feature likely to appear in future models is a broader test menu. Researchers are interested in testing several variables when performing a blood gas analysis; the variables likely to be added include the levels of lactate, blood urea nitrogen, and creatinine (Dev, Hillmer, and Ferri 9).
Future blood gas analyzers are also likely to have a compact design, which will prove crucial in saving space in hospitals. Laboratory practitioners are looking for blood gas analyzers that are easy to use in terms of operating instructions and data entry. Practitioners also prefer analyzers with a small sample requirement, because obtaining enough blood from patients can be challenging, especially when dealing with infants (Dev, Hillmer, and Ferri 9). Most medical practitioners want microliter-scale gas analyzers as opposed to the current milliliter-scale analyzers. Hospital administrators are interested in comparing results from several analyzers in different places; however, this is made impossible by the lack of standard workflow procedures and quality-assurance methods. As a result, hospital administrators are advocating increased standardization of gas analyzers (Dev, Hillmer, and Ferri 9). Other special features that some manufacturers plan to integrate into their devices include biometric identification. Current devices use passwords for security, which can be easily manipulated; the biometric features that will be used for identification in future models include thumbprint, retinal, and voice recognition (Dev, Hillmer, and Ferri 9). In summary, the technological improvement of blood gas analyzers will provide crucial information on metabolic diseases and respiratory problems in the future.
Dev, Shelly P., Melinda D. Hillmer and Mauricio Ferri. “Arterial Puncture for Blood Gas Analysis.” The New England Journal of Medicine (2011): 7-9. Print.
Singh, Virendra, Shruti Khatana and Pranav Gupta. “Blood gas analysis for bedside diagnosis.” Natl J Maxillofac Surg 4.2 (2013): 136–141. Print.
Sood, Pramod, Gunchan Paul and Sandeep Puri. “Interpretation of arterial blood gas.” Indian J Crit Care Med 14.2 (2010): 57–64. Print.
Uyanki, Metin, Erdim Sertoglu and Huseyin Kayadibi. “Comparison of blood gas, electrolyte and metabolite results measured with two different blood gas analyzers and a core laboratory analyzer.” Scandinavian Journal of Clinical & Laboratory Investigation 75 (2015): 97–105. Print.
World Health Organization. “Blood Gas/pH/Chemistry Point of Care Analyzer.” 2011. Print.
The investment casting process is a sophisticated method of producing metal castings that dates back to 4000 B.C. It is one of the oldest metal casting techniques and also one of the most advanced. The method was used around the world, in Africa, Egypt, China, and Mexico, by artisans and sculptors to create statues, jewelry, and idols (Krar & Gill, 2003). Modern industry ignored investment casting until it was brought back by dental professionals to create tooth crowns. The term investment casting comes from the use of a refractory slurry, or "investment," to form a mold with an extremely smooth surface and tight dimensional tolerances. During WWII, the technology was brought to the forefront when quickly producing precision parts became necessary; the process provided manufacturers with a shortcut for producing complex parts that were challenging to create using alternative methods.
Today, investment casting is used to produce automotive, aircraft, power-tool, and hand-tool components, and other recreational products, including golf club heads (Garg, 2005). Civilian and military jets rely on investment casting for the manufacture of engines and other parts. In automotive applications, investment casting is used to produce gears, splines, valves and fittings, and levers.
Proper handling of pattern wax before pattern production can eliminate a number of wax-pattern defects. All materials used in producing a casting are part of a system and must work together to ensure a quality casting; the final casting is only as good as the wax pattern produced (Beeley et al., 2008). Pattern wax is composed of products such as organic fillers, synthetic and natural resins, natural waxes, and petroleum waxes. Microcrystalline and paraffin waxes are produced from the distillation of crude oil. Paraffin is the most commonly used petroleum wax because it is cheaper than other raw materials. Additionally, paraffin wax controls and enhances the rheological properties of the blend; these rheological properties in turn affect the injection temperature and the fluidity of the pattern wax mix.
Resins are extracted from natural sources such as crude oil, coal tar, and pine trees, and can also be produced synthetically. They are used to add body during formulation and consequently affect the tackiness, hardness, rigidity, and shrinkage of the wax blend (Beeley et al., 2008). Candelilla and carnauba are the natural waxes used; they are derived from shrubs and palm leaves in Mexico and Brazil. Natural waxes affect the setup properties, surface finish, and hardness of the pattern wax blend. Sometimes synthetic additives are used in the natural wax formulation, since synthetic products are more stable and reliable than natural raw materials. Fillers are also important to the development of the pattern wax and are selected according to these criteria: low ash content, organic composition, fine particle distribution, and a relatively high melting point. Hydro-fill, bisphenol A, polystyrene, and isophthalic fillers are the commonly used organic fillers (Beeley et al., 2008).
Soluble waxes comprise three main raw materials: an effervescing carbonate, a filler, and a binder. The binder, polyethylene glycol (PEG), is available in various molecular weights and is used in different combinations to achieve the preferred melt point, hardness, and viscosity characteristics. The filler is a fine powder commonly used to improve shrinkage characteristics, and it also helps improve the structural strength of the wax blend. To improve the elastic properties and strength of the wax, fibrous materials are used. Sodium bicarbonate is used as the effervescing agent to help break down the soluble wax during the discharge process; it also adds to the bulk of the wax. Both the sodium bicarbonate and the fillers are inorganic materials, so foundries must adhere to recommended handling practices for proper heating.
Materials suitable for investment casting
Both non-ferrous and ferrous materials can be investment cast. Metals considered for investment casting can be melted in either a vacuum furnace or a regular furnace. Materials that are difficult to machine are also good candidates for investment casting. Castability rating, shrinkage, fluidity, and resistance to hot tearing are the main properties considered when selecting a metal for investment casting. Among ferrous metals, ductile iron and steel alloys are the most commonly poured. Non-ferrous metals include copper-based alloys and magnesium, with aluminum being the most popular non-ferrous metal.
Aluminum alloys generally have a density of 2.7 g/cm3, with the exceptions of A07130 and A07120, which have a density of 2.8 g/cm3, and A05350, with a density of 2.6 g/cm3. All aluminum alloys are hardenable except A05140 and A05350. A02010 has copper as its main alloying element; it is therefore a strong alloy with reasonable weldability and excellent machinability. A33550 is a premium-quality aluminum alloy with copper and silicon as its alloying elements, offering good castability, machinability, and weldability. A356, whose main alloying elements are silicon, magnesium, and/or copper, has poor brazeability but good weldability.
Carbon steel alloys have poor resistance to corrosion and fairly good machinability. Most carbon steel alloys have a density of 7.8 g/cm3 and are hardenable; 1010 and 1020, although also hardenable, have a density of 7.9 g/cm3. 1040 offers poor resistance to corrosion, good weldability, and medium strength, while 1050 offers good machinability and medium strength. Both 1040 and 1050 have fairly good fluidity, resistance to hot tearing, and shrinkage behavior, and both rate as good in castability.
Up to 870°C, Cobalt 6 is oxidation resistant, with good castability coupled with resistance to corrosion. Cobalt 12 has high resistance to corrosion and boasts excellent wear properties. Both Cobalt 21 and Cobalt 31 have excellent shrinkage behavior and fluidity, good resistance to hot corrosion, and a very good castability rating. Monel alloys provide good resistance to corrosion at both high and low temperatures; Monel 4020 has excellent fluidity, good resistance to tearing and shrinkage, and very good castability. Inconel 600 is resistant to corrosion except in the presence of sulfur, and it also offers good machinability.
Copper-based alloys have a density of 8.3 g/cm3 and are not hardenable; the exceptions to the density rule are Navy G and phosphor bronze, with densities of 8.8 g/cm3 and 8.7 g/cm3 respectively. Ductile irons have good machinability and poor resistance to corrosion. Tool steel alloys have a density of 7.8 g/cm3 and are all hardenable. Among the tool steel alloys, A-2 has good resistance to wear, resistance to corrosion at high temperature, and good machinability, weldability, and castability. H-13 has good resistance to corrosion, fair resistance to tearing, and poor toughness, but it is machinable, castable, and weldable.
Two forms of investment casting are in common use: ceramic shell and solid mold. They differ in the way the mold is formed. In the solid mold process, the pattern is placed in a container and the mold material is poured around it. In the ceramic shell process, the pattern is dipped into a ceramic slurry, withdrawn, and spun around to ensure a uniform coating. The coating is allowed to dry, and the dipping is repeated until the desired thickness is achieved. The mold is then heated to drain out the pattern wax, leaving a hollow cavity.
Investment casting begins with the preparation of wax patterns for the casting. One or more patterns can be attached to the sprue, depending on the complexity and size of the cast, and a ceramic pour cup is attached at the end of the sprue bar (Krar & Gill, 2003). This arrangement of wax patterns is known as a tree, because the casting patterns branch off the sprue like the branches of a tree. The wax pattern assembly is then dipped into a slurry of binders, silica, and water until the desired thickness is achieved. Once the required thickness has been built up, the refractory coating is left to dry and harden in air.
The next step is key to investment casting. After the ceramic mold hardens, it is turned upside down with the funnel side facing down and heated to a temperature of 90°C to 175°C (National Conference, 2003). This melts the wax inside, leaving a cavity for the casting; the ceramic mold, unlike the wax, does not melt under severe heat. The mold is then heated further, to between 550°C and 1100°C, which strengthens it while removing leftover wax. The molten metal is poured while the mold is still hot, which allows it to flow easily within the mold cavity. Pouring the molten metal into a hot mold also gives better dimensional precision, since the mold and the cast shrink together. Finally, after the molten metal solidifies, the ceramic mold is broken away; the end product of the investment casting process is the cast.
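The dimensional bookkeeping behind "the mold and the cast shrink together" can be sketched as follows. The shrinkage percentages below are placeholder assumptions; real values depend on the wax blend, shell system, and alloy.

```python
# Rough sketch of shrinkage compensation in investment casting: the wax die
# is oversized so that, after wax, shell, and metal shrinkage, the final
# cast dimension lands on the nominal value. Percentages are placeholders.
def die_dimension(nominal_mm: float,
                  wax_shrink: float = 0.010,    # assumed 1.0% wax shrinkage
                  shell_shrink: float = 0.002,  # assumed 0.2% shell shrinkage
                  metal_shrink: float = 0.013): # assumed 1.3% metal shrinkage
    """Size the wax die cavity so the finished casting hits the nominal size."""
    factor = (1 - wax_shrink) * (1 - shell_shrink) * (1 - metal_shrink)
    return nominal_mm / factor

print(f"{die_dimension(100.0):.2f} mm")  # ~102.55 mm die cavity for a 100 mm feature
```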
Investment casting is one of the oldest and most sophisticated metal casting processes. It is suitable for mass production and for producing complex metallic parts that would be impossible to manufacture by conventional processes. Investment casting was initially used by ancient artisans and sculptors and was later rediscovered by dental professionals. Today, the process is used to produce components for aircraft, hand and power tools, automotive products, and recreational products. A good cast depends on the quality of the entire casting process, from the formulation of the pattern wax to the pouring itself: all processes during casting are part of a system and must work together to ensure a quality cast. On the other hand, not all metals can be cast, and those that can have specific properties that make them suitable, including fluidity, castability, shrinkage, and resistance to hot tearing.
Beeley, P. R., Smart, R. F., & Institute of Materials (Great Britain). (2008). Investment casting. Leeds, UK: Maney Publishing.
Garg, S. K. (2005). Comprehensive workshop technology: Manufacturing processes.
Krar, S. F., & Gill, A. R. (2003). Exploring advanced manufacturing technologies. New York, NY: Industrial Press.
National Conference on Investment Casting, Mondal, B., & Central Mechanical Engineering Research Institute. (2004). Proceedings of the National Conference on Investment Casting: NCIC 2003. New Delhi: Allied Publishers.
Acoustic Gas Thermometry
In recent years, advancements in metrology have led to the use of a variety of thermometry methods for different purposes. Methods such as thermodynamic gas thermometry, dielectric constant gas thermometry, blackbody radiation thermometry, and acoustic gas thermometry are increasingly being used in the metrological sector and beyond, with different objectives (Moldover et al 2016). Such approaches have proved suitable for roles within the industry that were formerly accomplished only with lower efficiency. The modern methods are easier to use while also giving more accurate, reliable results. Reliability is assessed by comparing results from different approaches to thermometry, and in most cases the results obtained from two or more modern methods agree. Dielectric constant gas thermometry and acoustic gas thermometry, in particular, have found extensive use where gases are involved, especially where the objective is to determine either the molar gas constant or the Boltzmann constant.
Acoustic gas thermometry is applicable to a variety of roles in metrology. Its wide application makes it one of the most studied forms of modern thermometry, as many authors find it necessary to compare its results with those obtained through methods formerly considered accurate. In determining the Boltzmann constant, several authors apply acoustic gas thermometry as the standard of measurement; similarly, the molar gas constant is more accurately defined through measurements obtained by acoustic gas thermometry. To examine this subject, the following paper discusses various aspects of acoustic gas thermometry. Its main objective is to highlight the principles of acoustic gas thermometry and its relevance to industry today. To accomplish this objective, the paper first outlines the principles of operation of acoustic gas thermometers, stressing the characteristics that enhance their performance.
Acoustic Gas Thermometry Operating Basics
Moldover et al (2016) explore the operation of acoustic gas thermometry. According to their research, the use of various methods of thermometry has increased in recent years, advancing capabilities within and above the moderate temperature range of 1 K to 1235 K, which is the range in which acoustic gas thermometry finds its widest application. This type of thermometry relies on the kinetic degrees of freedom of a near-ideal gas under steady thermodynamic conditions: the Boltzmann constant \(k_B\) and the temperature of the gas together determine the velocity of the gas molecules. Moldover and others express this relationship as

\[
\tfrac{1}{2}\,m\,\langle v^2 \rangle = \tfrac{3}{2}\,k_B\,T,
\]

where \(m\) is the atomic mass, \(\langle v^2 \rangle\) is the mean-square velocity of the gas atoms, \(T\) is the temperature of the gas, and \(k_B\) is the Boltzmann constant. From this equation it is clear that any one parameter can be obtained once the others are available. For instance, for an atom of known mass at a given temperature, velocity measurements can determine the Boltzmann constant; conversely, if the Boltzmann constant is known but the molar mass is not, the molar mass can be determined through acoustic gas thermometry. This equation thus forms the basis of most applications of acoustic gas thermometry and the principle from which the other relationships are obtained.
In most cases, acoustic gas thermometry is complicated by the difficulty of achieving constant gas volumes; however, the gas volumes have no significant impact on the results obtained unless comparative analysis is needed. The primary approach to AGT involves determining the thermodynamic temperatures of monatomic gases such as helium, argon, and neon from the velocity of sound within an enclosed cavity. The applicability and accuracy of AGT in determining various gas characteristics underpins its use as a standard: most other methods for measuring parameters such as the Boltzmann constant and the molar gas constant have been found to be less accurate than AGT, hence its preference as a standard. For the measurement to succeed, the measured quantities must be traceable to the SI definitions of the kilogram, the metre, and the second. The reference temperature of acoustic gas thermometry is set at the triple point of water, defined as \(T_{tpw}\) (Moldover et al 2014). The gas used in the thermometry must be inert, since the characteristics of the gas can affect the results to some extent; this is the basis for using neon, helium, and argon. In AGT, the rationale for relying on sound is that the speed of sound, like the speed of light, can be measured in the gas-filled cavity and related to the state of the gas.
AGT thermometers measure the frequencies of several acoustic resonances while at the same time measuring the frequencies of several microwave resonances of a gas-filled cavity. The cavities are usually metallic, and the gas can be argon, helium, or neon. The frequencies measured at a particular temperature are combined into a ratio comparable to the ratio of the speed of sound to the speed of light in the gas at that temperature. These ratios form the basis of temperature determination; the proportionality constant is influenced by the nature of the gas in the cavity and by the shape of the cavity used. The measured ratios are compared with those at the triple point of water, which are determined beforehand. The unknown temperature follows from a mathematical relation between the frequency ratios and the temperature ratio:

\[
\left(\frac{T_x}{T_{tpw}}\right)^{1/2} = \frac{(f_a/f_m)_{T_x}}{(f_a/f_m)_{T_{tpw}}} \quad \text{(Moldover et al 2016)}
\]
In the equation, the left-hand side is the square root of the ratio of the unknown temperature to the temperature of the triple point of water; the terms on the right are the acoustic-to-microwave frequency ratios at the unknown temperature and at the triple point of water, respectively. Since the frequency ratios at the triple point of water are known and those at the other temperature are measured, the unknown temperature can be obtained by inserting all the values into their respective places and solving. The frequency ratios are affected neither by the gas pressure nor by the small cavity-shape changes that result from thermal expansion, although they are affected by large differences in actual cavity shape. The thermometers work within the range of 84 K to 550 K. It is important to note that the individual frequency values, unlike the ratios, are affected by the gas pressure and by cavity-shape differences.
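The working equation above turns directly into code. The sketch below assumes acoustic and microwave resonance frequencies have already been measured at the unknown temperature and at the triple point of water; the function name and the example numbers are illustrative.

```python
# Sketch of the AGT working equation: (Tx/Ttpw)^(1/2) equals the ratio of
# (acoustic/microwave) frequency ratios at the two temperatures.
# Example frequencies below are invented purely for illustration.
T_TPW = 273.16  # K, triple point of water

def agt_temperature(fa_x, fm_x, fa_tpw, fm_tpw, t_tpw=T_TPW):
    """Thermodynamic temperature from acoustic (fa) and microwave (fm)
    resonance frequencies at the unknown point and at the TPW."""
    ratio = (fa_x / fm_x) / (fa_tpw / fm_tpw)
    return t_tpw * ratio ** 2

# Illustrative numbers: if the combined frequency ratio rises by 10%,
# the inferred temperature rises by 21%, because the ratio enters squared.
print(agt_temperature(fa_x=11_000.0, fm_x=5_000_000.0,
                      fa_tpw=10_000.0, fm_tpw=5_000_000.0))  # ~330.5 K
```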
Because the results depend on the shape of the cavity, several characteristics have to be considered when selecting a cavity for AGT. For instance, the cavity must possess high Q values, which helps ensure that the resonance measurements are not limited by losses in the cavity design. Furthermore, it should have non-overlapping resonances in both the microwave and the acoustic regimes. The second requirement is understandable given that the method depends on comparing microwave and acoustic frequencies: overlapping resonances can cause ambiguity and loss of accuracy in the frequency measurements. Needless to say, inaccuracies in the frequency measurements propagate into all other applications of the thermometer. The cavity used in most cases is a quasi-sphere, constructed by joining two nearly hemispherical cavities (Moldover et al 2016).
While the cavity shape is described as almost spherical, it is not arbitrary. On the contrary, it is the result of in-depth engineering design, approximating a triaxial ellipsoid with axis ratios \(1 : (1+\epsilon) : (1-\epsilon)\). Here \(\epsilon\) is a small, deliberate deformation parameter, chosen to be greater than 0.0005 yet less than 0.001 (Moldover et al 2016). The deformation must be large enough to separate the different microwave frequencies; this implies that although the deformation is kept small, there is a limit below which it cannot go, since further reduction would cause the microwave resonances to overlap, and the acoustic frequencies could be affected by the same overlapping problem. At the same time, the deformation must be small enough that its effects, which scale as the square of \(\epsilon\), remain negligible; this averts the shape imperfections that could bias the frequency estimates. The main objective is to ensure that the thermometric results are accurate and usable as standards; proper planning and execution avert inaccuracies and unexpected outcomes.
Specific applications of AGT for standards
Different acoustic cavity shapes can be used with AGT to obtain different frequency results. The formation of acoustic resonances depends on the shape of the cavity and can be used to determine various gas-related constants from measurements carried out in dilute gases. This is usually implemented by applying radially symmetric modes of operation (Feng et al undated). According to Feng et al (undated), the key challenge in determining frequencies through AGT is recovering the unperturbed frequencies from perturbed measurements. Consequently, the objective of any such practice has to be to make the measurements as stable as possible. This is particularly important in specific applications of AGT such as the determination of Boltzmann's constant, the molar gas constant and various gas characteristics. All of these applications begin with the determination of unknown temperatures, which is the objective of every approach to thermometry.
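For a sense of where these radially symmetric modes fall, the sketch below computes the first radial acoustic resonances of an idealized rigid spherical cavity, whose eigenvalues are the roots of tan(z) = z. The cavity radius and speed of sound are illustrative values only, not parameters of any apparatus in the cited papers.

```python
import math

# Sketch: frequencies of the radially symmetric acoustic modes of an
# ideal rigid spherical cavity of radius a. The boundary condition
# j0'(z) = 0 gives eigenvalues equal to the roots of tan(z) = z.
# Radius and speed of sound below are illustrative values.

RADIAL_EIGENVALUES = [4.4934095, 7.7252518, 10.9041216]  # first three roots

def radial_mode_frequencies(speed_of_sound, radius_m):
    """f_n = u * z_n / (2 * pi * a) for the radial (0, n) modes."""
    return [speed_of_sound * z / (2 * math.pi * radius_m)
            for z in RADIAL_EIGENVALUES]

# Helium near 273 K has a speed of sound of roughly 973 m/s; in a
# hypothetical 50 mm radius cavity the first radial modes fall near
# 13.9, 23.9 and 33.8 kHz.
print(radial_mode_frequencies(973.0, 0.050))
```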
AGT for Boltzmann’s constant determination
Feng et al (undated) describe the most common application of the AGT principle. According to the authors, Boltzmann's constant can be determined through AGT by using fixed-length cylindrical cavities instead of other cavity shapes. The main rationale for the cylindrical cavity is that it is simple to machine and to install, so it can easily be used at different sites and moved from position to position without interfering with the gas characteristics. On the other hand, working with a cylindrical cavity results in an unequal distribution of admittances between different operational modes; this is, however, a challenge shared by most of the other AGT cavity shapes. Identifying such challenges is an effective first step towards addressing them and ensuring that they are well handled.
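As a rough illustration of the cylindrical geometry, the sketch below computes the longitudinal acoustic modes of an ideal closed cylinder, for which f_n = n·u/(2L). The cavity length is a hypothetical value, not a dimension reported by Feng et al.

```python
# Sketch: longitudinal acoustic modes of an ideal closed cylindrical
# cavity of length L, f_n = n * u / (2 * L). The dimensions are
# illustrative, not those of the apparatus in the cited work.

def cylinder_longitudinal_modes(speed_of_sound, length_m, n_modes=3):
    return [n * speed_of_sound / (2 * length_m)
            for n in range(1, n_modes + 1)]

# For a hypothetical 80 mm cavity filled with helium (u ≈ 973 m/s),
# the first three longitudinal modes lie near 6.1, 12.2 and 18.2 kHz.
print(cylinder_longitudinal_modes(973.0, 0.080))
```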
The determination of Boltzmann's constant through AGT is described as less dependent on corrections than earlier methods, including dielectric-constant gas thermometry, and it therefore provides accuracies beyond those expected from conventional approaches. Gavioso et al (2010) found experimentally that more accurate values of Boltzmann's constant are obtained using helium at a single thermodynamic state, as helium is the most inert of the gases, is light, and has a higher speed of sound than most other gases. The frequency ratios obtained in this way are therefore more accurate than those obtained through other thermometry approaches or through acoustic cavities filled with other gases. The determination of Boltzmann's constant rests on the relation between the kinetic energy of the gas particles and the thermodynamic temperature of the gas: the temperatures obtained from the frequency ratios can be inserted into the first equation to extract the constant, and iterating the process improves the accuracy of the result. Because the mean kinetic energy of the particles depends only on temperature, the value obtained should not depend on the gas used; the choice of a light, inert gas such as helium improves the precision of the measurement rather than changing the constant itself, which is consistent with past determinations of Boltzmann's constant (Feng et al undated).
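The kinetic-energy relation can be made explicit with a short sketch. For a monatomic ideal gas the speed of sound obeys u² = γkT/m with γ = 5/3, so a measured speed of sound at a known temperature yields k. The speed-of-sound value below is a rounded illustrative number, not a result from Gavioso et al (2010).

```python
# Sketch: extracting the Boltzmann constant from a speed-of-sound
# measurement in a monatomic ideal gas, using u^2 = gamma * k * T / m
# with gamma = 5/3. The speed of sound is a rounded illustrative value.

GAMMA_MONATOMIC = 5.0 / 3.0
M_HELIUM_KG = 4.002602 * 1.66053906660e-27  # mass of one helium-4 atom

def boltzmann_from_speed_of_sound(u_m_per_s, temperature_k,
                                  atom_mass_kg=M_HELIUM_KG):
    return atom_mass_kg * u_m_per_s ** 2 / (GAMMA_MONATOMIC * temperature_k)

# Helium at the triple point of water has u ≈ 972.5 m/s, which gives
# k ≈ 1.38e-23 J/K.
print(boltzmann_from_speed_of_sound(972.5, 273.16))
```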
Molar gas constant R from AGT
Besides Boltzmann's constant, AGT has also been confirmed as an accurate approach to determining the molar gas constant. Gavioso et al (2015) explored the available approaches, with results indicating that AGT is more accurate than alternatives such as black-body radiation thermometry and thermodynamic gas thermometry for this purpose. The frequency ratios given by acoustic thermometry can be used together with Boltzmann's constant to determine the molar gas constant. Relations linking the molar gas constant to gas characteristics such as the relative molecular mass and the root-mean-square velocity can be applied in the chemical mole equations to yield the constant. The use of inert gases ensures that the values obtained are not only accurate but also acceptable in the engineering industry. Gavioso et al (2015) report that the determination of the molar gas constant is more a function of the thermodynamic properties of the gas than of the cavity.
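The same speed-of-sound relation gives R directly when the atomic mass is replaced by the molar mass, R = M·u²/(γT); dividing the result by Avogadro's number recovers the Boltzmann constant. The sketch below uses the same rounded illustrative speed of sound as before, not a value from Gavioso et al (2015).

```python
# Sketch: the molar gas constant from the speed-of-sound relation
# R = M * u^2 / (gamma * T), with M the molar mass of the gas.
# The helium speed of sound is a rounded illustrative value.

GAMMA_MONATOMIC = 5.0 / 3.0
M_HELIUM_KG_PER_MOL = 4.002602e-3  # molar mass of helium-4

def molar_gas_constant(u_m_per_s, temperature_k,
                       molar_mass=M_HELIUM_KG_PER_MOL):
    return molar_mass * u_m_per_s ** 2 / (GAMMA_MONATOMIC * temperature_k)

# With u ≈ 972.5 m/s at 273.16 K this yields R ≈ 8.314 J/(mol K).
print(molar_gas_constant(972.5, 273.16))
```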
Triple point determination from AGT
While the conventional application of AGT is to determine an unknown gas temperature, Pitre et al (2006) provide an uncommon use of the technique. In their work, they showed that AGT can be used to determine the triple points of various substances, applying the second equation outlined previously as a starting point, just as in the other applications. The authors used a quasi-spherical cavity filled with neon, argon or mercury, depending on the substance whose triple point was to be determined. Based on their findings, such an approach can yield more accurate results than alternatives: Pitre et al (2006) established that AGT produced uncertainties that were smaller than or comparable to those obtained with a constant-volume gas thermometer for the triple points of the targeted substances, and the results were also comparable to those obtained with dielectric-constant gas thermometers.
The determination of triple points is founded on applying a variety of frequency ratios from the initial cavity resonance measurements. By comparing microwave and acoustic frequencies, it is possible to determine the unknown temperatures that align with the triple points of the substances. The main challenge, however, lies in identifying the actual triple point of the substance in the cavity; this requires close monitoring of the gas and accurate measurement of the frequencies at the observed point of state transition. Temperature determination with AGT for triple-point measurement depends on data at many temperature points, and this can only be accomplished if the results are collected with high accuracy. The thermometer must therefore be calibrated at different temperature values, since intermediate temperatures cannot be obtained simply by manipulating temperatures assumed to enclose the desired range (Moldover et al 2016). Consequently, such calibration is mandatory when determining triple points, as the recorded temperatures can otherwise be inaccurate; the calibrations then guide the user to the most realistic results. Because the substances differ in their characteristics, a different calibration has to be used for each substance to be measured.
Acoustic gas thermometry is one of the fastest-emerging trends in thermometry, and is especially applicable in metrology. In comparison with other modern thermometry technologies, AGT gives the user the capacity to establish standards for the molar gas constant, Boltzmann's constant, triple-point temperatures and unknown gas temperatures. The determination of unknown gas temperatures is the starting point of every application of AGT. In the determination of Boltzmann's constant and the molar gas constant, the values are derived from the first and second equations together with the molar-gas-constant relations. Triple-point determination, on the other hand, relies on calibrations conducted at different temperatures, owing to the difficulty of approximating temperatures from other temperature values.
Feng XJ, Zhang JT, Lin H, Gillis KA and Moldover MR undated. Determination of the Boltzmann constant using the differential cylindrical procedure. The National Institute of Standards and Technology, USA. Retrieved from www.arxiv.org/ftp/arxiv/papers/1501/1501.02519.pdf
Gavioso RM, Benedetto G, Albo PAG, Ripa M, Merlone A, Guianvarch D, Moro F and Cuccaro R 2010. A determination of the Boltzmann constant from speed of sound measurements in helium at a single thermodynamic state. Metrologia, vol. 47, no. 4, pp. 387-409. Retrieved from www.iopscience.iop.org/article/10.1088/0026-1394/47/4/005
Gavioso RM, Ripa DM, Steur PP, Gaiser C, Truong D, Guianvarch D, Tarizzo T, Stuart FM and Dematteis R 2015. A determination of the molar gas constant R by acoustic thermometry in helium. Metrologia, vol. 52, no. 5. Retrieved from www.iopscience.iop.org/article/10.1088/0026-1394/52/5/S274/meta
Moldover MR, Gavioso RM, Mehl JB, Pitre L, de Podesta M and Zhang JT 2014. Acoustic gas thermometry. Metrologia, vol. 51, no. 1, pp. R1-R19. Retrieved from www.iopscience.iop.org/article/10.1088/0026-1394/51/1/R1/pdf
Moldover MR, Tew WL and Yoon HW 2016. Advances in thermometry. Nature Physics, vol. 12, pp. 7-11. Retrieved from www.nature.com/nphys/journal/v12/n1/pdf/nphys3618.pdf?origin=ppub
Pitre L, Moldover MR and Tew WL 2006. Acoustic thermometry: new results from 273 K to 77 K and progress towards 4 K. Metrologia, vol. 43, no. 1. Retrieved from www.iopscience.iop.org/article/10.1088/0026-1394/43/1/020