Saturday, January 25, 2020

Wireless networks: Security

Wireless networks, due to their ease of installation, cost benefits and the capability of connectivity (and hence communication) anywhere, have become the most popular way of setting up a network in the 21st century. With the increase in demand for mobile systems, the electronic market has been flooded with laptops, PDAs, RFID devices, healthcare devices and wireless VoIP (Voice over IP) devices, all of which are Wi-Fi (Wireless Fidelity) enabled. With the 3G (Third Generation) and 4G (Fourth Generation) cellular wireless standards, mobile phones are also Wi-Fi enabled, with very high speeds provided for data upload and download. Nowadays malls and public areas, not to mention entire cities, are Wi-Fi capable, enabling a person to access the internet, or even contact a remote server in his office, from anywhere in the city, even from a mobile phone while strolling down the road.

But as every good technology has its drawbacks, so do wireless networks. Just as in the case of wired networks, they are prone to intruder attacks, more commonly known as wireless hacking, which compromise the network's security, integrity and privacy. The basic reason for this is that when the wireless network was first introduced, it was considered to have security and privacy built into the system while transmitting data. This misconception arose because wireless transmitters and receivers used spread spectrum systems, whose signals occupy a wide transmission band. Since the RF (Radio Frequency) receivers of the time could only intercept signals in a narrow transmission band, these wireless signals were considered to be in the safe zone. But it did not take long for devices to be invented that could intercept these wireless signals as well; hence the integrity of data sent over wireless networks could easily be compromised. With the development of technology, the methods and ways in which a network can be attacked have also become more vicious.

Fig-1: WLAN (Wireless Local Area Network)

Security of wireless networks against such vicious attacks has hence become a priority for the network industry. Not all networks are equally secure; the security depends on where the network is used. For example, if the requirement is to provide a wireless hotspot in a shopping mall, then security is of little concern, but a corporate network will have its own security authentication and user access control implemented.

II. WHY WIRELESS NETWORKS ARE PRONE TO ATTACKS

There are a number of reasons why wireless networks are prone to malicious attacks. These are the most challenging aspects to be considered when a secure wireless network has to be established.

a) Wireless networks are open networks: There is no physical medium protecting these networks. Any packet transmitted and received can be intercepted if the receiver uses the same frequency as the transmitter and receiver of the wireless network. There is also a common misconception that if authentication and encryption are properly used the network will not be compromised. But what about the messages sent back and forth before the authentication and encryption come into play?
b) Distance and location: The attacker can attack from any distance and location, limited only by the power of the transmitter. Special devices have been designed which can attack even short-range networks such as Bluetooth.

c) Identity of the attacker: The attacker can always remain unidentified, because he uses a series of antennas or other compromised networks before reaching the actual target. This makes wireless network attackers very difficult to track.

Some of the reasons why such attacks are so common are the easy availability of information from none other than the Internet, easy-to-use cheap technology and, of course, the motivation to hack.

III. WIRELESS HACKING STEP BY STEP

To understand the security protocols currently in use for wireless networks, it is first important to understand the methods by which a weak network is attacked by a hacker. These are also known as wireless intrusion methods.

A. Enumeration: Also known as network enumeration, this is the first and foremost step in hacking: finding the wireless network. The wireless network could be a specific target, or even a random weak network which can be compromised and used to attack other end systems or networks. This feat is achieved by using network discovery software, nowadays available online in plenty; to name a few, Kismet and NetStumbler. In order to gather more information about the network, the packets sent and received by the network can be sniffed using network analysers, also known as sniffers. A large amount of information can be obtained this way, including IP addresses, SSIDs, and even sensitive information such as MAC addresses, the type of information carried, and the other networks to which the compromised end system connects. Yet another problem is the use of network mappers, which can be used to find the servers that run these compromised networks; attacking these servers could then affect their proper functioning and the transfer of information between the servers and the other networks connected to them.

B. Vulnerability assessment: This is mainly done by the hacker using a vulnerability scanner. After the hacker has found the network he wants to attack, he uses this program to detect the weaknesses of the computer, computer systems, networks or even applications. After this the intruder decides on the most probable means of entry into the network.

C. Means of Entry:

IV. TYPES OF THREATS AND ATTACKS

A. Eavesdropping and traffic analysis: This form of attack makes use of the weak encryption of the network, and always compromises the integrity and security of the network. Attacks such as war driving, war chalking, packet sniffing and traffic analysis all fall under this category.

B. Message modification: These attacks are mainly used to modify the data sent across a network. The modification might give wrong information or add malicious content to the data packets sent from one station to another. This compromises the integrity and privacy of the data.

C. Rogue devices: These could be devices such as APs (access points) or application software programs which have been compromised by the intruder and made to function as he wishes. Such devices can compromise the integrity of the network as well as the data sent across it. These devices can also launch replay attacks and associate the network with malicious websites or information.
D. Session hijacking: This attack occurs after a valid session has been established between two nodes through the AP. The attacker poses as a valid AP to the node trying to establish a connection, and as a valid node to the AP. The attacker can then send malicious or false information to the node with which the connection has already been established, while the legitimate node believes that the AP has terminated the connection with it. The hacker can then use this connection to get sensitive information from the network or the node.

E. Man-in-the-middle attacks: This is similar to a session hijacking attack, but in this case a rogue AP acts as a valid client to the legitimate AP and as a valid AP to the legitimate client. Once this has been established, the rogue AP can access all information passing through it, intercept communication, and send malicious information to other clients.

These are just a few of the security threats and attacks in wireless environments. With advancing technologies there are many more possible security threats that these networks may face in the future.

V. BASIC REQUIREMENTS IN WIRELESS NETWORK SECURITY

With the vulnerability of wireless networks, security and the countering of such malicious attacks have become one of the top priorities addressed by enterprises and corporations as well as research fields in IT. There are many points to be considered when the security of a network is concerned, the most important of which are authentication, accountability and encryption.

A. Authentication: This is familiar to anyone using a network in his or her workplace, or even accessing email on the internet, and it is the very first step in promoting a secure wireless network. There are many different ways of authenticating, and many different tools and methods have been used over the years to make this primary process more reliable and foolproof. Some of the most widely used methods are:

a) Username and password combinations, generally defined as something that a person knows.

b) Smart card, RFID and token technologies, also known as something that a person has.

c) Biometric solutions such as fingerprinting or retina scanning, which can be generally defined as something that a person is.

Now, the reliability of each of these methods can vary depending on the level at which it has been implemented. In the case of very low-level authentication only one kind of method is used to secure the network. One of the weakest forms of authentication is the use of only ID cards or token technologies: if a person loses one, it can compromise the security of the network. Even in the case of a username and password, the strength of the authentication is only as good as the complexity of the information used as the username or password. People generally prefer passwords that are easy to remember, but these are often also known to many other people in the organisation or even outside it. One much better way of securing a network through authentication is to use biometric solutions such as fingerprinting or retina scanning, but of course technology has advanced to the extent that even fingerprints or retinas can be forged. Nowadays combinational methods are used for authentication, with high-security premises or networks guarded by more than two or three kinds of authentication.
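As an aside on the username-and-password method: the usual way to keep a stolen credential table from revealing passwords is to store only a random salt and a slow, salted hash of each password. The sketch below is our own illustration using Python's standard library (the function names are not from any particular product), not a scheme described in the original text.

    import hashlib
    import hmac
    import os

    def hash_password(password):
        # Store only (salt, digest); the plaintext is never kept.
        salt = os.urandom(16)  # random per-user salt
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("password123", salt, digest)

The slow key-derivation function (100,000 iterations here) is what makes brute-forcing a leaked table expensive, which directly addresses the weak-password problem described above.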
B. Accountability: After a user has been authenticated to use the network, it is important to be able to track the computer usage of each person on the network, so that in case of any foul play the person responsible can be held to account. When networks were very small it was very easy for a network administrator to track the usage of each person on a network, but with huge networks, remote access facilities and of course wireless networks, it has become quite a difficult task. As mentioned earlier, there are many ways in which a hacker can make himself difficult to track down. Much software and firmware has been created which is used in conjunction with the authentication protocols in order to make the wireless network more secure and robust.

C. Encryption: This is the most important step in building and securing a strong wireless network infrastructure. The guidelines generally followed are:

a) Use methods based on public key infrastructure (PKI).

b) Use a high-bit encryption scheme.

c) The algorithm used for encryption must be well known and proven to be very hard to break.

Current wireless network security solutions can be classified into three broad categories: a) unencrypted solutions, b) encrypted solutions, and c) combinations of the two. In this paper the emphasis, as explained in the abstract, will be on encrypted solutions for wireless security; a brief discussion of the unencrypted methods is still given for basic understanding. In the case of encryption-based security protocols, a detailed description is given of the ones commonly used in wireless LANs, after which the latest and developing technologies will be discussed. The three major generations of security as they exist today, and as cited in many papers, journals and magazines, are as follows:

1) WEP (Wired Equivalent Privacy)
2) WPA (Wi-Fi Protected Access)
3) WPA2

The image below shows the layer in which the wireless network security protocols come into play, which is of course the link layer:

Fig-1: 802.11 AND OSI MODEL

VI. WIRELESS SECURITY: UNENCRYPTED

A. MAC registration: This is one of the weakest methods of network security. MAC registration was basically used to secure university residential networks such as college apartments or dorm rooms. The basic way of doing this is to configure DHCP (Dynamic Host Configuration Protocol) to lease IP addresses only to a known set of MAC addresses, which can be obtained manually or by running automated scripts on a network server; so basically any person with a valid registration can enter the network. Session logs also cannot be generated, because of which accounting of the logs becomes impossible. Last but not least, since this method of securing was basically adapted from switched, wired networks, encryption was never included.

B. Firewalls: In this method, network authentication is done through either HTTP (Hypertext Transfer Protocol), HTTPS or telnet. When an authentication request is received by the network it is directed to the authentication server. On validating the authentication, the firewall adds rules for the IP address provided to that user. This IP address also has a timer attached to it in order to indicate the rule timeout for this IP address. When executed through HTTPS this is a session-based as well as a secure process, but any other process adapted from switched, wired network firewalls does not provide encryption.
C. Wireless Firewall Gateways: One of the latest, and considerably more foolproof, methods among unencrypted solutions is the Wireless Firewall Gateway, or WFG. Here a single wireless gateway is integrated with a firewall, router, web server and DHCP server, and it is because all of these are in one system that the WFG is a very secure wireless security solution. When a user connects to the WFG, he/she receives an IP address from the DHCP server. The web server (over HTTPS) then asks for a username and password, which is handled by PHP (Hypertext Preprocessor). Address spoofing and unauthorised networks are avoided, as the DHCP logs are constantly compared with the current, updated ARP (Address Resolution Protocol) table. This verifies that the computer connected to the network is using the IP address that has been leased to it by the DHCP server. This information is then passed on to the authentication server, which in turn adds rules for this IP address. Upon the expiration of the DHCP lease the sessions are terminated. The WFG hence makes the authentication and accountability part of the network more reliable, but as this is also an unencrypted method it lacks the most important aspect of security.

VII. WEP: WIRED EQUIVALENT PRIVACY

This protocol was written in accordance with the security requirements of the IEEE 802.11 wireless LAN protocol. It is adapted from the wired LAN system, and hence the security and privacy provided by it are intended to be equivalent to the security and privacy provided by a wired LAN. Though it is an optional part of wireless network security, it gives a considerably more secure networking environment. The algorithm used in WEP is known as RC4 (Rivest Cipher 4). In this method a pseudorandom number is generated using encryption keys of random lengths. This is then bound with the data bits using an exclusive-OR (XOR) operation in order to generate the encrypted data which is then sent. To look at it in more detail:

A. Sender side: The pseudorandom keystream is generated using the 24-bit IV (initialization vector) given by the network administrator together with a 40- or 104-bit secret key, or WEP key, given by the wireless device itself. These are concatenated and passed on to the WEP PRNG (pseudorandom number generator). At the same time the plaintext is combined with an integrity algorithm to form the ICV (integrity check value). The keystream, the plaintext and the ICV are then combined through the RC4 cipher to form the ciphertext. This ciphertext is then again combined with the IV to form the final encrypted message, which is then sent.

Fig-2: WEP SENDER SIDE

B. Receiver side: On the receiver side the message is decrypted in five steps. First the preshared key and the encrypted message are combined. The result is then passed through another PRNG, and the resulting keystream is passed through the RC4 algorithm, retrieving the plaintext. This is then combined with the integrity algorithm to form a new ICV, which is compared with the received ICV to check for integrity.

Fig-3: WEP RECEIVER SIDE

C. Brief descriptions:

a) Initialization vector: basically random bits, the size of which is generally 24 bits, though this also depends on the encryption algorithm. The IV is also sent to the receiver side, as it is required for decrypting the data.
b) Preshared key: more or less like a password. This is provided by the network administrator and is shared between the access point and all network users.

c) Pseudorandom number generator: this creates a unique secret key for each packet sent through the network. This is done by using some 5 to at most 13 characters of the preshared key, together with randomly taken characters from the IV.

d) ICV and integrity algorithm: this is used with the plaintext to create a check value which can then be compared, on the receiver side, with the ICV the receiver generates itself. This is done using a CRC (cyclic redundancy code) technique to create a checksum; for WEP, CRC-32 of the CRC family is used.

D. RC4 algorithm: The RC4 algorithm is not proprietary to WEP; it can also be described as a random generator or stream cipher. Developed at RSA laboratories in 1987, this algorithm uses a logical function, specifically XOR, to add the key to the data.

Figure 5: RC4 Algorithm

E. Drawbacks of WEP: There are many drawbacks associated with WEP encryption, and there are programs now available which can easily hack through it, leaving a network using WEP vulnerable to malicious attacks. Some of the problems faced by WEP:

WEP does not prevent forgery of packets.
WEP does not prevent replay attacks: an attacker can simply record and replay packets as desired and they will be accepted as legitimate.
WEP uses RC4 improperly: the keys used are very weak, and can be brute-forced on standard computers in hours to minutes using freely available software.
WEP reuses initialization vectors, so a variety of available cryptanalytic methods can decrypt data without knowing the encryption key.
WEP allows an attacker to undetectably modify a message without knowing the encryption key.
Key management is lacking and key updating is poor.
There are problems in the RC4 algorithm itself.
Authentication messages can easily be forged.
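To make the WEP construction above concrete, the following sketch implements the RC4 keystream and the CRC-32 ICV as just described, for both sender and receiver sides. It is a toy illustration of the scheme, not a faithful 802.11 frame format: the 3-byte IV and 5-byte key match the 24-bit IV and 40-bit key mentioned above, while the helper names are our own.

    import os
    import struct
    import zlib

    def rc4(key, data):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA), XORed with the data
        i = j = 0
        out = bytearray()
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    def wep_encrypt(wep_key, plaintext):
        iv = os.urandom(3)                               # 24-bit IV, sent in the clear
        icv = struct.pack("<I", zlib.crc32(plaintext))   # CRC-32 integrity check value
        return iv + rc4(iv + wep_key, plaintext + icv)   # keystream seed = IV || key

    def wep_decrypt(wep_key, message):
        iv, ciphertext = message[:3], message[3:]
        data = rc4(iv + wep_key, ciphertext)             # XOR stream cipher is its own inverse
        plaintext, icv = data[:-4], data[-4:]
        assert struct.pack("<I", zlib.crc32(plaintext)) == icv, "integrity check failed"
        return plaintext

    key = b"\x01\x02\x03\x04\x05"                        # 40-bit WEP key
    msg = wep_encrypt(key, b"hello over the air")
    assert wep_decrypt(key, msg) == b"hello over the air"

Note that even this sketch exhibits two of the drawbacks listed above: the IV travels in the clear (and at 24 bits must eventually repeat), and CRC-32 is linear, so an attacker can flip ciphertext bits and fix up the ICV without knowing the key.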
VIII. WPA: WI-FI PROTECTED ACCESS

WPA was developed by the Wi-Fi Alliance to overcome most of the disadvantages of WEP. The advantage for users is that they do not have to change their hardware when making the change from WEP to WPA. The WPA protocol gives more complex encryption than WEP through TKIP, and with the MIC it also helps to counter the bit-flipping attacks which hackers used against WEP, by using a method known as hashing. The figure below shows the WPA encryption method.

Figure 6: WPA Encryption Algorithm (TKIP)

As can be seen, it is almost the same as the WEP technique, enhanced by TKIP, but a hash is also applied before using the RC4 algorithm to generate the keystream. The IV is duplicated and a copy is sent to the next step; the copy is also combined with the base key in order to generate another special key. This, along with the hashed IV, is used by RC4 to generate the sequential key, which is then added to the data, or plaintext, using the XOR operation. The final message is then sent, and it is decrypted using the inverse of this process.

A. TKIP (Temporal Key Integrity Protocol): The confidentiality and integrity of the network are maintained in WPA by using improved data encryption through TKIP. This is achieved by using a hashing function algorithm and also an additional integrity feature to make sure that the message has not been tampered with. TKIP contains four new algorithms that perform various security functions:

a) MIC, or Michael: This is a coding system which improves the integrity of data transfer in WPA. The MIC integrity code is 64 bits long but is divided into two 32-bit little-endian words (least significant bits first), for example (K0, K1). This method is basically used to ensure that the data does not get forged.

b) Countering replay: There is one particular kind of forgery that cannot be detected by MIC, called a replayed packet. Hackers do this by capturing a particular packet and then sending it again at another instance in time. In this method, each packet sent by the network or system has a sequence number attached to it, achieved by reusing the IV field. If a packet received at the receiver has an out-of-order or smaller sequence number than the packet received before it, it is considered a replay and is discarded by the system.

c) Key mixing: In WEP a secure key is generated by concatenating end to end the base key, which is a 40- or 104-bit sequence obtained from the wireless device, with the 24-bit IV number obtained from the administrator or the network. In the case of TKIP, the base key is replaced by a temporary key which has a limited lifetime, and which changes from one destination to another. This is explained by the first of the two phases of key mixing. In Phase I, the MAC address of the end system or wireless router is mixed with the temporary base key; the temporary key hence keeps changing as the packet moves from one destination to another, as the MAC address of any router, gateway or destination is unique. In Phase II, the per-packet sequence key is also encrypted by adding a small cipher using RC4 to it. This keeps the hacker from deciphering the IV, or the per-packet sequence number.

d) Countering key collision attacks, or rekeying: This provides fresh sequences of keys which can then be used by the TKIP algorithm. Temporal keys, which have a limited lifetime, have already been mentioned; the other two types of keys provided are the encryption keys and the master keys. The temporal keys are the ones used by the TKIP privacy and authentication algorithms.

B. Advantages of WPA: The advantages of WPA over WEP can be clearly understood from the above descriptions. Summarising a few:

a) Forgeries of the data are avoided by using MIC.

b) WPA can actively prevent packet replay by the hacker by providing a unique sequence number for each packet.

c) Key mixing generates temporal keys that change at every station, together with per-packet sequence key encryption.

d) Rekeying provides unique keys for consumption by the various TKIP algorithms.

IX. WPA2: WI-FI PROTECTED ACCESS 2

WPA2, as the name suggests, is a modified version of WPA in which Michael has been replaced with an AES-based algorithm known as CCMP, instead of TKIP. WPA2 can operate in two modes: the home mode and the enterprise mode. In the home mode all users are required to use a 64-bit passphrase when accessing the network. This is the sort of encryption used in wireless routers at home or in very small offices. The home version has the same problems faced by users of WEP and the original WPA security protocol.
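The replay countermeasure described under TKIP above reduces to a per-sender monotonic sequence check. A minimal sketch (our own simplification; real TKIP uses a 48-bit sequence counter with per-queue state) might look like this:

    class ReplayGuard:
        """Discard any packet whose sequence number does not strictly increase."""

        def __init__(self):
            self.last_seq = -1  # no packet seen yet

        def accept(self, seq):
            if seq <= self.last_seq:
                return False     # replayed or out-of-order packet: discard
            self.last_seq = seq
            return True

    guard = ReplayGuard()
    assert guard.accept(1) and guard.accept(2)
    assert not guard.accept(2)   # a recorded packet sent again is rejected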
The enterprise version is, of course, for use by larger organisations where the security of the network is too valuable to be compromised. It is based on the 802.1X wireless architecture, an authentication framework known as RADIUS, and an authentication protocol from the EAP (Extensible Authentication Protocol) family, EAP-TLS, together with a secure key.

A. 802.1X:

Figure 7: 802.1X Authentication Protocol

In order to understand the security protocols used in WPA2 it is important to know a little about the 802.1X architecture for authentication. This was developed in order to overcome many security issues in the 802.11b protocol. It provides much better security for the transmission of data, and its key strength is of course authentication. There are three important entities in the 802.1X protocol: the client, the authenticator and the authentication server.

a) Client: the STA (station) in a wireless network which is trying to access the network. This station could be fixed, portable or even mobile, and it of course requires client software which helps it connect to the network.

b) Authenticator: another name for the AP (Access Point). The AP receives the signal from the client and sends it over to the network to which the client requires a connection. There are two parts to the AP, the uncontrolled port and the controlled port, which is more of a logical partitioning than an actual partition. The uncontrolled port receives the signal and checks its authentication to see whether the particular client is allowed to connect to the network. If the authentication is approved, the controlled port of the AP is opened for the client to connect to the network.

c) Authentication server: a RADIUS (Remote Authentication Dial In User Service) server. This has its own database table of the users that have access to the network, which makes things easier for the APs, as the user information database need not be stored in the AP. Authentication in RADIUS is more user-based than device-based. RADIUS makes the security system more scalable and manageable.

Figure 8: EAP/RADIUS Message Exchange

B. EAP (Extensible Authentication Protocol): The key management protocol used in WPA2 is EAP, which over a wireless link can also be called EAPOW (EAP over wireless). Since there are many versions of this protocol in the EAP family, it is advisable to choose the EAP protocol best suited to the particular network. The diagram and the steps following it describe how a suitable EAP can be selected for a network:

a) Step 1: By checking the previous communication records of the node using a network analyser program, it can easily be detected whether any malicious or potentially compromising packets have been sent from this node to other nodes, or received from other nodes by this node.

b) Step 2: By checking the previous logs of the authentication protocols used, the most commonly used and the most successful authentication protocols can be identified.

Figure 9: EAP Authentication with Method Selection Mechanism

c) Step 3: The specifications of the node itself have to be understood, such as the operating system used, the hardware and software, and even the certificate availability of the node.

After all this has been examined, the following steps can be run in order to determine and execute the most suitable EAP authentication protocol:
    1. Start
    2. if (communication_record available) then
           read communication_record;
           if (any_suspicious_packets_from_the_other_node) then
               abort authentication;
               go to 5;
           else if (authentication_record available) then
               read authentication_record;
               if (successful_authentication available) then
                   read current_node_resources;
                   if (current_node_resources comply with last_successful_method) then
                       method = last_successful_method;
                       go to 4;
                   else if (current_node_resources comply with most_successful_method) then
                       method = most_successful_method;
                       go to 4;
                   else
                       go to 3;
               else
                   go to 3;
           else
               go to 3;
       else
           go to 3;
    3. read current_node_resources;
       execute method_selection(current_node_resources);
    4. execute authentication_process;
    5. End

X. RSN: ROBUST SECURITY NETWORKS

RSN was developed with reference to the IEEE 802.11i wireless protocol. This connection can provide security from a very moderate level up to high-level encryption schemes. The main entities of 802.11i are the same as those of the 802.1X protocol: the STA (client), the AP and the AS (authentication server). RSN uses TKIP or CCMP for confidentiality and integrity protection of the data, while EAP is used as the authentication protocol. RSN is a link-layer security scheme, i.e. it provides encryption from a wireless station to its AP, or from one wireless station to another. It does not provide end-to-end security, and it can only be used for wireless networks; in the case of hybrid networks, only for the wireless part of the network. The following are the features of a secure network that are supported by RSN ( WRITE REFERENCE NUMBER HERE):

a) Enhanced user authentication mechanisms
b) Cryptographic key management
c) Data confidentiality
d) Data origin authentication and integrity
e) Replay protection

A. Phases of RSN: The functioning of the RSN protocol can be divided into five distinct phases. The figure and the steps below describe the phases in brief:

a) Discovery phase: This can also be called network and security capability discovery. In this phase the AP advertises that it uses the IEEE 802.11i security policy. An STA which wishes to communicate with a WLAN using this protocol will, upon receiving this advertisement, communicate with the AP. The AP gives the STA a choice of the cipher suite and authentication mechanism it wishes to use during communication with the wireless network.

Figure 9: Security States of RSN

b) Authentication phase: Also known as the authentication and association phase. In this phase, the AP uses its uncontrolled port to check the authentication proof supplied by the STA with the AS. Any data other than the authentication data is blocked by the AP until the AS returns the message that the authentication provided by the STA is valid. During this phase the client has no direct connection with the RADIUS server.

c) Key generation and distribution: During this phase cryptographic keys are generated by both the AP and the STA. Communication takes place only between the AP and the STA during this phase.

d) Protected data transfer phase: This phase, as the name suggests, is when data is transferred through the AP, from the STA that initiated the connection to the STA on the other end of the network.

e) Connection termination phase: Again as the name suggests, here the data exchanged is purely between the AP and the STA, in order to tear down the connection.
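To summarise the five RSN phases, here is a toy state machine (entirely our own illustration, not something defined by 802.11i) showing the order in which an STA/AP pair moves through them, dropping back to discovery on any failure:

    from enum import Enum, auto

    class RsnPhase(Enum):
        DISCOVERY = auto()
        AUTHENTICATION = auto()
        KEY_GENERATION = auto()
        DATA_TRANSFER = auto()
        TERMINATION = auto()

    # Each phase may only advance to the next one; failure drops back to discovery.
    NEXT = {
        RsnPhase.DISCOVERY: RsnPhase.AUTHENTICATION,
        RsnPhase.AUTHENTICATION: RsnPhase.KEY_GENERATION,
        RsnPhase.KEY_GENERATION: RsnPhase.DATA_TRANSFER,
        RsnPhase.DATA_TRANSFER: RsnPhase.TERMINATION,
    }

    def advance(phase, success):
        if not success:
            return RsnPhase.DISCOVERY  # e.g. the AS rejects the STA's credentials
        return NEXT.get(phase, RsnPhase.DISCOVERY)

    phase = RsnPhase.DISCOVERY
    for ok in (True, True, True, True):
        phase = advance(phase, ok)
    assert phase is RsnPhase.TERMINATION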

Friday, January 17, 2020

Lord of the Flies vs. the Destructors Essay

Fiction looks at all ranges of topics through the eyes of many diverse characters. Lord of the Flies and The Destructors are no different, in the sense that you see two extremely striking situations through the eyes of surprising characters. These stories both take a look at society and the primitive aspects it can have. The main characters in both stories are children of young ages exhibiting surprising and sometimes extremely shocking behavior, displaying a loss of innocence. They differ in the sense that Lord of the Flies looks at how savage a human can get in desperate situations, while the other looks at how savage a person can get against a society they feel victimized by.

These two works have similarities that can be easily identified. They both display groups of adolescents interacting with extreme situations. Lord of the Flies depicts children stranded on an island who must come together in order to find solutions. Desperation sets in, which motivates them to act more and more savage as time goes on. This is similar to The Destructors, because the short story displays a similar group of young children who display savage behavior toward a community. While one is a residential community and the other is an island, the island represents a community for these boys for the duration of the story, because they are stranded upon it.

Both stories display a power struggle through two characters. Lord of the Flies shows this through Jack and Ralph, and in The Destructors it is seen through Trevor and Blackie. Jack and Ralph both attempted to become chief of the new tribe, Ralph winning by a few votes. However, as time goes on their primitive behaviors shine through, creating a divide between the children, and Jack develops his own tribe. Jack's influence motivates the children to become violent and savage toward Ralph and his group, resulting in the killing of one of Ralph's friends, Piggy. All of the teamwork and civil behavior that Ralph represents is slowly eroded until the children all turn into the monsters that Jack represents. Ralph was about structure and finding rescue, which is evident in his design of two groups, one for food and one for a fire signal, while Jack was all about savage behavior and power over the other children. In The Destructors, Blackie and Trevor both have the potential to be the leader of the Wormsley Common Gang, and it can be seen through their dialogue that they are both aware that they want it. Blackie tries to display this by attempting to prevent Trevor from voting on what kind of trouble they get into when he is late to their meeting, but Trevor does not allow him. The peak of this struggle comes when they are discussing ideas and Trevor tells them about destroying Old Misery's house from the inside. Blackie tries his best to discourage this, pointing to the potential for police involvement and the inability to accomplish the plan, but Trevor continues to push the idea until it is voted for and chosen. This symbolized the end of Blackie's reign over the group, and when one member asks "How do we start?" Blackie simply walks away saying, "He'll tell you," implying that he knows what has occurred and realizes his role of leadership has been taken over. Both groups in each story displayed how easily a dynamic can change through power.

When you look at the stories from another angle, you can see that the messages they carry differ extremely. Lord of the Flies was all about human nature and the ends it can go to.
This novel is a timeless one because of the message it sends through the least likely characters, young boys. The Destructors is a more believable story because of the type of violence that is seen in it. While damage to someone's home is awful, and the manner in which they did it was extremely special, Lord of the Flies uses violence against one another, resulting in psychotic breaks and children losing their lives at the hands of others. The longer these children are with one another, the more they start to lose their humanity and gain more primal instincts and ways of acting. Jack is the best candidate to display this because of how he grows more and more corrupt. After starting his own tribe, he has enabled himself to dictate what he feels his followers should do, and he allowed them to become savage as well. If he felt that other children needed to be punished, he felt no hesitation, even to the point of murdering another child. He started wearing clay masks, which symbolize the adoption of a new, more primal façade. The novel wraps up with Ralph being rescued but crying, because he reflects on everything that has happened, how far these young children have fallen and what points they all reached.

The Destructors depicts a group of children who aim to destroy a neighborhood, leaving an old man's house for last. These children differ from the ones in Lord of the Flies because, though they commit some pretty questionable acts, it is more delinquency than primal behavior. These boys are doing violent things because of the violence of the war they witness around them. With World War II going on, these children witness bombings often, leaving them with the feeling that they need to do something. They decide to become a gang that will make its mark around London, causing crimes each more extreme than the last. Trevor motivates these boys to destroy an old man's house, but instead of simply destroying it while he is away, they decide to wreck it from the inside out. Trevor says, "We'd be like worms, don't you see, in an apple." (pg. 12) However, midway through the job the old man, Old Misery, comes home unexpectedly and is locked away until the job is finished. The ending displays Old Misery sobbing as his house is destroyed, and the lorry driver who was around ends the story by laughing and saying, "There's nothing personal, but you got to admit it's funny." (pg. 22) This is actually the exact opposite reaction to what Lord of the Flies displayed: even though Mr. Thomas was sobbing at his loss, similar to Ralph's reaction, the lorry driver laughed at the comedy of the situation.

These stories all depict children doing things that we would not typically expect to see in society. However, the lack of a society in both works has allowed behavior of this magnitude to occur. These stories show us that, though they are different kinds of crimes in different contexts, society is what can be considered the common thread through both stories. Society and its influence can really affect the people in it, and if you are in a society that doesn't provide a positive structure, you could display the actions seen in Lord of the Flies or The Destructors.

Thursday, January 9, 2020

The Changing Attitudes Toward Athletics

The changing attitudes toward athletics began in the mid-1820s, when sport became commercialized and publicized and organizations began to form. Harness racing became the first modernized sport, which saw change thanks to the growth and transformation of America. You first begin to see the formation of organizations at the local, regional and national level. Rules became formal, written and legitimized by the organization, where before, rules were based on local customs, so variations were plentiful. Competition also changed, going from local to national and even international. People began to have the chance to establish themselves in sport, with additional opportunities to make money. Professionals first began to emerge during this period in harness racing, as the lines between spectator and participant became clearly defined. Public information was reported regularly through newspapers and journals, and specialized magazines and guides on sports began to appear, where rules and statistics were publicized. Permanent structures for harness racing began to appear in cities. During the 1870s, four critical steps occurred to legitimize racing and, thereafter, sport: the creation of the first establishment dedicated to racing (1871), the first sporting journal (1875), the formation of the National Association of Trotting Horse Breeders (1876) and the establishment of a standard breed of trotting horse (1879). The legitimacy as well as new income realities allowed money to be…

Wednesday, January 1, 2020

Fabrication of YBa2Cu3O7−δ and Determination of its Superconducting Transition Temperature

A superconducting material is one which, below a certain critical temperature, exhibits, amongst other remarkable traits, a total lack of resistivity, perfect diamagnetism and a change in the character of the specific heat capacity. The BCS theory describes perfectly the phenomenon of superconductivity in low temperature superconductors, but cannot explain the interaction mechanism in high temperature superconductors. In order to determine the superconducting transition temperature of two laboratory-fabricated batches of YBCO, their resistivity and specific heat capacity were measured as functions of temperature.
In 1957, Bardeen, Cooper and Schrieffer managed to construct a wave function in which electrons are paired. Know as the BCS theory of superconductivity it is used as a complete microscopic theory for superconductivity in metals. One of the key features of the BCS theory is the prediction of an energy gap, the consequences of which are the thermal and most of the electromagnetic properties of superconducting materials. The key conceptual element to this theory is the formation of Cooper pairs close to the Fermi level. Although direct electrostatic interactions between electrons are repulsive it is possible for the distortion of the positively charged ionic lattice by the electron to attract other electrons. Thus, screening by ionic motion can yield a net, attractive interaction between electrons (as long as they have energies which are separated by less than the energy of a typical phonon) causing them to pair up, albeit over long distances. Given that these electrons can experience a net attraction it is not unreasonable that the electrons might form bound pairs, effectively forming composite bosons with integer spin of either 0 or 1. This is made even more likely by the influence of the remaining electrons on the interacting pair. The BCS theory takes this idea one step further and constructs a ground state in which all of the electrons form bound pairs. This electron-phonon interaction invariably leads to one of the three experimental proofs of the BCS theory. A piece of theory known as the isotope effect provided a crucial key to the development of the BCS theory. It was found that for a given element the super conducting transition temperature, TC, was inversely proportional to the square root of the isotope mass, M (equation 1). TCà ¢Ã‹â€ ?M-12 (1)[vii] This same relationship holds for characteristic vibrational frequencies of atoms in a crystal lattice and therefore proves that the phenomenon of superconductivity in metals is related to the vibrations of the lattice through which the electrons move. However this only holds true for low temperature superconductors (a fact which will be discussed in more detail at a later stage in this section). Both of the two further experimental proofs of BCS theory come from the energy gap in the superconducting material. The first proof is in the fact that it was predicted and actually exists (figure 2) and the second lies in its temperature dependence. From band theory, energy bands are a consequence of a static lattice structure. However, in a superconducting material, the energy gap is much smaller and results from the attractive force between the electrons within the lattice. This gap occurs Ά either side of the Fermi level, EF, and in conventional superconductors arises only below TC and varies with temperature (as shown in figure 3). Figure 2: Dependence of the superconducting and normal density of states, DS and Dn respectively. From Superconductivity, Poole, C.P., Academic Press (2005), page164 At zero Kelvin all of the electrons in the material are accommodated below the energy gap and a minimum energy of 2Ά must be supplied in order to excite them across the gap. BCS theory predicts equation 2 which has since been experimentally proven, ΆT=0=CkBTC (2) [viii] where theoretically the constant C is 1.76 although experimentally in real superconductors it can vary between 1.75 and 2.45. Figure 3: Temperature dependence of the BCS gap function, Ά. Adapted from The Superconducting State, A.D.C. 
Figure 3: Temperature dependence of the BCS gap function, Δ. Adapted from The Superconducting State, A.D.C. Grassie, Sussex University Press (1975), page 43.

As stated before, it has been found that the first of these BCS proofs does not hold for high temperature superconductors. In these materials it has been found that in the relation stated as equation 1, the exponent tends towards zero as opposed to minus one half. This indicates that for high temperature superconductors it is not the electron-phonon interaction that gives rise to the superconducting state. Numerous interactions have been explored in an attempt to determine the interaction responsible for high temperature superconductivity, but so far none have been successful.

Figure 4: A plot of TC against TF derived from penetration depth measurements. Taken from Magnetic-field penetration depth in K3C60 measured by muon spin relaxation, Uemura Y.J. et al., Nature (1991) 352, page 607.

In figure 4 it can be seen that the superconducting elements constrained by BCS theory lie far from the vast majority of new high temperature superconducting materials, which appear to lie on a line parallel to TF, the Fermi temperature, and TB, the Bose-Einstein condensation temperature, indicating a different interaction mechanism.

One of the most extensively studied properties of the superconductor is its specific heat capacity and how its behaviour changes with temperature (seen in figure 5). It is known that above the transition temperature the normal state specific heat of a material, Cn, can be given by equation 3 (below), which consists of a linear term from the conduction electrons and a cubic phonon term (an additional Schottky contribution has been ignored in this case, and γ and A are constants).

Cn = γT + AT^3 (3)[ix]

Due to the aforementioned energy gap, it is also predicted by BCS theory that at the superconducting transition temperature there will be a discontinuity in the specific heat capacity of the material of the order of 1.43, as seen in equation 4 (where CS is the superconducting state heat capacity) and figure 5.

(CS − γTC) / (γTC) = 1.43 (4)[x]

However, for high temperature superconductors this ratio is likely to be much smaller, due to a large contribution from the phonon term in the normal state specific heat capacity.

Figure 5: Heat Capacity of Nb in the normal and superconducting states showing the sharp discontinuity at TC. Taken from The Solid State, Third Edition, H.M. Rosenberg, Oxford University Press (1988), page 245.

Now that the concept of the high temperature superconductor has been explained, this report can return to one of the initial concepts: how the behaviour of resistivity changes with temperature. A low temperature superconductor is likely to obey the T^5 Bloch law at low temperatures, and so its resistivity will fall to zero in a non-linear region. In contrast, the resistivity of a high temperature superconductor should fall to zero before it leaves the linear region. The resistivity profile of a high temperature superconductor can also be used to determine its purity: by comparing the range of temperatures over which the transition occurs with the transition temperature itself, an indicator of purity can be determined (equation 5, where PI is the purity indicator and ΔT the magnitude of the region over which the transition occurs). In this case a value of zero would indicate a perfectly pure sample.

ΔT / TC = PI (5)[xi]
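As a quick numerical illustration of equations 2 and 5 (a back-of-envelope check with illustrative values, not results from this report):

    # Numerical check of the BCS gap (eq. 2) and the purity indicator (eq. 5).
    K_B = 8.617e-5  # Boltzmann constant in eV/K

    def bcs_gap_at_zero(t_c, c=1.76):
        """Delta(T=0) = C * kB * TC, in eV; BCS predicts C = 1.76."""
        return c * K_B * t_c

    def purity_indicator(delta_t, t_c):
        """PI = dT / TC; zero would indicate a perfectly pure sample."""
        return delta_t / t_c

    # A conventional superconductor (Nb, TC = 9.25 K) vs a cuprate near 90 K:
    print(bcs_gap_at_zero(9.25))        # ~1.4e-3 eV, a gap of order 1 meV
    print(bcs_gap_at_zero(90.0))        # ~1.4e-2 eV, if BCS scaling were to hold
    print(purity_indicator(2.0, 87.8))  # ~0.023 for a transition 2 K wide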
ΆTTC=PI (5)[xi] Other than for scientific purposes, within the laboratory, the biggest application of superconductors at the moment is to produce to the large, stable magnetic fields required for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR). Due to the costliness of high temperature superconductors the magnets used in these applications are usually low temperature superconductors. It is for this same reason that the commercial applications of high temperature superconductors are still extremely limited (that and the fact that all high temperature superconducting materials discovered so far are brittle ceramics which cannot be shaped into anything useful e.g. wires). Yttrium barium copper oxide (or YBCO) is just one of the aforementioned high temperature, cuprate superconductors. Its crystal structure consists of two CuO2 planes, held apart by a single atom of yttrium, either side of which sits a BaO plane followed by Cu-O chains. This can be seen in greater detail in figure 6. Figure 6: The orthorhombic structure of YBCO required for superconductivity. Adapted from High-Temperature Superconductivity in Curpates, A. Mourachkine, Kluwer Academic Publishers (2002), page 40 If the structure only has 6 atoms of oxygen per unit cell then the Cu-O chains do not exist and the compound behaves as an antiferromagnetic insulator. In order to create the Cu-O chains and for the compound to change to a superconductor at low temperatures it has to be doped gradually with oxygen. The superconducting state has been found to exist in compounds with oxygen content anywhere from 6.4 to 7 with optimal doping being found to occur at an oxygen content of about 6.95.[xii] This report intends to determine the superconducting transition temperature of a laboratory fabricated sample of YBCO. This will be achieved by measuring how both its resistivity and specific heat capacity vary as a function of temperature. II.I Fabrication and Calibration Methods To ensure an even firing of the sample within the furnace and to find out where in the furnace the heating profile was closest to that of the actual heating program, three temperature profiles of the furnace were taken while heating. The length of the furnace was measured with a metre ruler and found to be 35 ±1cm. Four k-type thermocouples were then evenly spaced (every 11.5 ±0.5cm) along the length of it, as can be seen in figure 7 below. Figure 7: Transverse section of the furnace. Thermocouples are numbered 1 to 4 and the length of the furnace surrounded by heating coils is shown in green, blocked at either end by a radiation shield. Temperature profiles were taken for each of the temperature programs displayed in figure 8; all started at room temperature and were left to run until the temperature displayed by the thermocouples had stopped increasing. Target Temp ( °C) Heating Rate ( °Cmin-1) Elapsed time between data (s) 350 10 180 650 15 180 950 10 300 Figure 8: Details of furnace programs used to obtain the temperature profiles shown in section III. While this was being done samples of YBCO were fabricated. The chemical equation for the fabrication of YBCO is as follows in equation 6 and the amounts of the reactants required to fabricate 0.025 mol are displayed in figure 9 Y2O3+4BaCO3+6CuIIOà ¢Ã¢â‚¬  2YBa2Cu3O7-ÃŽÂ ´ (6) Reactant Mol RMM (gmol-1) Mass (g) Y2O3 0.0125 225.81 2.8226 BaCO3 0.050 197.34 9.8675 CuIIO 0.075 79.54 5.9655 Figure 9: Quantities of reactants required to fabricate 0.025 mol YBCO. 
The procedure for fabrication can be seen in figure 10. Using this technique two batches of YBCO were fabricated; the first yielded just one pellet and the second yielded four.

Figure 10: Describes the steps taken during fabrication of superconducting YBCO samples.

In order to obtain a more accurate value of the temperature within the sample space of the cryostat, the resistance of a platinum thermometer was measured as a function of temperature. To do this, a Pt100 platinum thermometer was varnished to one side of a cryostat probe and connected via a four point probe to a power source (as can be seen in figure 11), an ammeter and a voltmeter (Keithley 2000 DMMs). The ammeter and the voltmeter were connected to a computer in order that live data could be fed straight into a LabView program (appendix 2), which would record the data to both a much greater accuracy and precision than could be done by a human. Although a stable and constant current was used, it was felt necessary, in the interest of good practice, to add the live feed ammeter into the LabView program, as tiny fluctuations in current could have potentially changed results in ways that would not have been noticed otherwise. The probe was then placed in the sample space, which was subsequently vacuumed (to a pressure of 8×10^-4 Torr) and flushed with helium twice. The sample space was then left full of helium due to its high thermal conductivity. The cryostat was cooled with liquid nitrogen to a temperature of approximately 77 K and the LabView program left to record the change in the resistance of the platinum thermometer (using Ohm's law, V = IR) and its corresponding temperature (from the intelligent temperature controller, or ITC) while the cryostat heated up naturally. The temperature increase function of the program was not used, as leaving the cryostat to heat up as slowly as possible allowed data to be gathered over a much greater period of time, which led to a relationship with less error. This relationship was plotted in order that the temperature dependent resistance profile of the platinum thermometer could be incorporated into the LabView program for use in future experiments, to determine more accurately the temperature of the sample space.
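The calibration itself amounts to a straight-line fit of temperature against resistance. A minimal sketch of such a fit is given below; the (R, T) pairs are hypothetical stand-ins for the logged data, chosen only to be consistent with equation 7 in section II.II.

    # Linear fit T = a*R + b to the logged (resistance, temperature) pairs.
    import numpy as np

    R = np.array([30.1, 45.7, 61.2, 76.8, 92.3, 99.2])        # ohm (illustrative)
    T = np.array([100.7, 139.6, 178.3, 217.2, 255.9, 273.2])  # K (illustrative)

    (a, b), cov = np.polyfit(R, T, 1, cov=True)
    a_err, b_err = np.sqrt(np.diag(cov))
    print(f"T = {a:.4f}(±{a_err:.4f})·R + {b:.2f}(±{b_err:.2f})")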
While this was being done, the dimensions of the cut samples were measured using vernier callipers and the samples weighed in order to determine a density for YBCO. Each dimension was measured six times (to reduce random error) by two different people (to reduce systematic error). The off-cuts of each batch of YBCO were then sent off for X-ray diffraction analysis in order to determine the chemical composition of the fabricated samples. The diffraction was carried out using a wavelength of 1.54184 Å.

II.II Fabrication and Calibration Results, Analysis and Interpretation

The three temperature profiles of the furnace can be seen below in figure 12. The results are slightly skewed due to one end of the furnace having been left open in order to allow the thermocouples to sit inside the furnace. This can be seen back in figure 7. The measurements were taken by eye over a 10 second time period. It was therefore decided that the errors on the time should be ±5 seconds and the error on the temperature ±1 K, both of which are unfortunately too small to be seen on the profiles. The data points were fitted to cubic curves as this best matched the physical behaviour of the heating.

Figure 12: Temperature profiles of the furnace. The temperature of the program is shown in black crosses and the temperatures of thermocouples 1, 2, 3 and 4 are shown in yellow, red, green and blue respectively.

It can immediately be seen from figure 12 that, during the initial stages of heating, the temperatures of all of the thermocouples lag behind that of the furnace program, particularly those of the thermocouples at the open end of the furnace (1 and 2). This can be accounted for by poor thermal insulation at the open end of the furnace. It can also be seen that as the furnace reaches its required temperature and begins its dwell time, the temperatures of the thermocouples continue to rise for a short duration before also levelling out. The most likely reason for this is that once the furnace reaches its required temperature the program will instantaneously cut the current to the heating coils. These will, however, still hold thermal energy, which will leach through the ceramic inner of the furnace into the firing space itself. Another striking feature of the profiles is that the longer the furnace takes to reach the required temperature, the more linear the increase in temperature is throughout the furnace. It was therefore deduced that had the furnace been sealed at both ends with radiation rods and covers, the centre of the furnace would have had the temperature profile closest to that of the furnace program. It was also decided that, in order to ensure a steady, linear rate of heating, a slower increase in temperature would be used.

The masses of the batches before and after calcination were compared and found to have decreased by an average of 2.44(±0.01)% of their initial masses. This was expected, as one of the by-products created during the calcination of BaCO3 is CO2, which would have been removed from the furnace during this heating period, therefore reducing the mass of the compound. The weights of the samples from batch two before and after annealing were compared and it was found that each of the samples of YBCO had increased in mass by an average of 3.51(±0.03)% of their initial masses. This was unexpected, as during the annealing process the compound is reduced and so should lose mass. One possible explanation could be a simultaneous reduction and oxygen doping of the compound in order to try and fill the copper and oxygen chains shown in figure 6.

The densities of both batches of YBCO were calculated by weighing each of the samples from that batch and dividing their masses by their measured volumes. The densities of batches one and two were found to be 5.25(±0.04) g cm-3 and 3.5(±0.1) g cm-3 respectively. The greater error stated with the value of the density of the second batch of YBCO is a result of an error on the mean being taken, whereas the error on the density of the first batch is merely propagated from those of its volume and mass, as there was only one sample. When literature values of the density of YBCO were consulted it was found that the compound has a variable density of anywhere from 4.4 to 5.3 g cm-3.[xiii] When comparing this range to the experimentally determined values of this parameter, it was found that the density of the first batch lay just inside the range whilst the density of the second batch lay well below the lower end of it. One possible reason for the very low value of the density of batch two could be that its samples were left in the press for less time than batch one during sintering.
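The two error treatments described above can be sketched as follows; all masses and volumes here are hypothetical, chosen only to give numbers of the same order as those quoted.

    import numpy as np

    # Batch one: a single sample, so the density error is propagated from the
    # fractional errors on mass and volume in quadrature.
    m, dm = 2.10, 0.01      # g (hypothetical)
    V, dV = 0.400, 0.003    # cm^3 (hypothetical)
    rho1 = m / V
    drho1 = rho1 * np.sqrt((dm / m)**2 + (dV / V)**2)
    print(f"batch one: {rho1:.2f} ± {drho1:.2f} g/cm^3")

    # Batch two: four samples, so the quoted error is the error on the mean.
    rho2 = np.array([3.3, 3.6, 3.4, 3.7])  # g/cm^3 (hypothetical)
    sem = rho2.std(ddof=1) / np.sqrt(len(rho2))
    print(f"batch two: {rho2.mean():.1f} ± {sem:.1f} g/cm^3")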
All samples were checked to see whether they exhibited the Meissner effect. All did, and a photograph showing this can be seen below in figure 13.

The X-ray analysis of the two laboratory fabricated batches of YBCO can be seen in figure 14 below. The intensities were recorded every 0.01 degrees and then scaled appropriately using the greatest intensity in order that they could be compared to each other. As can be seen in figure 14, when both data sets are overlaid negligible differences can be seen. This indicates that both batches have almost identical chemical compositions and structure. A reasonable amount of background noise can be seen, accompanied by an offset from zero intensity which changes in magnitude as the angle of diffraction increases. This can be accounted for by two factors. The first is tiny random impurities in the batches, obtained by fabrication outside of a totally clean environment. The second is that small levels of the initial reactants may not have formed the required compound during calcination and annealing. A standard diffraction pattern of YBCO produced using the same wavelength of radiation was taken from The Chemical Database Service and can be seen below in figure 15. When this is compared to the patterns of the two laboratory fabricated samples in figure 14, all of the same intensity peaks can clearly be identified. This would indicate that YBCO had been successfully fabricated.

Figure 15: X-ray diffraction pattern of YBCO6. Calculation of the structural parameters of YBa2Cu3O7−δ and YBa2Cu4O8 under pressure, Ludwig H.A. et al., Physica C (1992) 197, 113-122.

It was expected that the comparison of standard diffraction patterns of YBCO of different oxygen contents with those fabricated within the laboratory would allow their oxygen content to be deduced. This, however, could not be achieved, as all of the standard patterns of YBCO found in journals and online databases for oxygen contents of 6 to 7 had extremely similar diffraction patterns.

The resistance of the platinum thermometer was plotted against temperature and can be seen in figure 16. A linear relationship was fitted to the data, as seen in figure 16, which produced a reduced chi squared value of 1.317 and equation 7.

T = 2.4958(±0.0007)R + 25.54(±0.04) (7)

The reduced chi squared value indicates a strong linear relationship, while the equation of the line gives a resistance of 99.2(±0.2) Ω at a temperature of 273.2(±0.1) K. When compared to the technical data for this component, which gives a resistance of 100.00 Ω[xiv] at a temperature of 273.15 K, this shows very close correspondence, although not within error. A temperature of one fewer significant figure's accuracy had to be used in this calculation due to the inability of the ITC to measure temperature to more than one decimal place. This slight difference between the reference and experimental values of the resistance of the Pt100 at a given temperature can be accounted for by the position of the ITC's heat sensor. This lies just outside the sample space and would cause the ITC's heat sensor to detect a small increase in temperature before it was received by the Pt100 within the sample space, causing the Pt100 to lag behind in temperature (even if only slightly). This would therefore cause the slightly lower resistance for the given temperature calculated above, and can be seen as a very slight systematic error.
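Inverting equation 7 gives the resistance the calibration predicts at the ice point, which is the comparison made above:

    # Predicted Pt100 resistance at 273.2 K from equation 7, T = 2.4958*R + 25.54.
    a, b = 2.4958, 25.54
    T_ice = 273.2                    # K, limited by the ITC's 0.1 K resolution
    R_pred = (T_ice - b) / a
    print(f"R({T_ice} K) = {R_pred:.1f} ohm (datasheet: 100.00 ohm at 273.15 K)")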
III.I Resistivity Methods

One of the cut samples was fixed to the opposite side of the probe to the Pt100 with thermally insulating varnish, and four copper wire contacts were painted onto it with electrically conductive silver paint. The separation of each of the four wires was measured with vernier callipers, six times each by two different people for the same reasons as before, and recorded for later calculation. A four point probe resistance measurement was used in order to avoid indirectly measuring resistances other than the sample resistance; the contact resistance and the spreading resistance are both included in a simple two point resistance measurement. The four point probe uses two separate contacts to carry current and two to measure the voltage (in order to set up a uniform current density across the sample) and can be seen in figure 17. In a four point probe the current carrying probes will still be subject to the extra resistances, but this will not be true for the voltage probes, which should draw little to no current due to the high impedance of the voltmeter. The potential, V, at a distance, r, from an electrode carrying a current, I, in a material of resistivity, ρ, can be expressed by

V = ρI/(2πr) = (ρI/2π)(1/S1 + 1/S3 − 1/(S1+S2) − 1/(S2+S3)) (8)[xv]

where r has also been expressed in terms of the contact separations (figure 17). This can be rearranged in order to calculate the value of the resistivity of the material being measured.
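Rearranged for ρ, equation 8 becomes 2π(V/I) divided by the bracketed geometric factor. A small sketch of this rearrangement follows; the drive current and contact separations are hypothetical.

    import numpy as np

    def resistivity(V, I, s1, s2, s3):
        # Equation 8 rearranged: rho = 2*pi*(V/I) / geometric factor.
        geom = 1/s1 + 1/s3 - 1/(s1 + s2) - 1/(s2 + s3)
        return 2 * np.pi * (V / I) / geom

    # Hypothetical values: 1 mA drive current, contacts roughly 1.5 mm apart;
    # with separations in mm the result is in ohm mm.
    rho = resistivity(V=2.0e-4, I=1.0e-3, s1=1.5, s2=1.4, s3=1.6)
    print(f"rho = {rho:.3f} ohm mm")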
The probe was once again inserted into the cryostat and the cryostat was cooled as detailed in section II.I. Once the sample had reached a temperature equal to that of the boiling point of liquid nitrogen, a LabView program was left to run which recorded the resistance of the sample and its corresponding temperature. The program used to do this can be seen in appendix 2. Although a temperature increase function was built into the program, the cryostat was left to warm up naturally for the same reason as when calibrating the platinum thermometer. The set up for this can be seen below in figure 18.

Figure 18: Schematic for the resistivity experiment. Vacuum pumps and pressure gauges have been omitted, as well as the heater on the ITC, as none of these bear any real relevance to the experiment. Data cables are shown in red, the Pt100 in blue and the sample in grey.

This was repeated for each sample of fabricated YBCO at least twice, and the temperature dependent resistivity profiles can be seen in section III.II.

III.II Resistivity Results, Analysis and Interpretations

The resistance profile of the sample from the first batch was measured twice and these profiles can be seen in figure 19. Unfortunately it was not possible on this occasion to measure the four point probe contact separations on this first sample before it was removed, and so these profiles could not be adjusted to those of resistivity using equation 8. However, as this transformation is simply a stretch in the y-axis, it does not change the behaviour of the transition or the value of the transition temperature obtained from the profile. It can be seen in figure 19 that although the first profile cuts out at a temperature of approximately 190 K, both profiles follow virtually the same path until that point. The first profile cut out early due to data points being taken once every second, causing the program to fail and shut down. The number of data points was therefore cut to one every three seconds for subsequent experiments.

With measurements being taken automatically by computer (and with the Keithley multimeter's ability to measure currents and voltages to 7 significant figures), the errors on the resistance were negligible (±0.003% of the value of the resistance) and so cannot be seen in figure 19. The same is true of the errors on the temperature: assuming that equation 7 is correct, a ±0.003% error on any calculated resistance means the temperature of the sample space should only have an error of ±0.04 K.

Had each of the samples been perfectly pure, their profiles would have a very sharp transition between the states and the transition temperature would be very clear. However, as a result of the broadening of this transition due to the impurity of the samples, a temperature could not be clearly defined. Had powerful enough graphing software been to hand, and were the profile able to be fitted to any known curve on this software, the most reliable way to find the transition temperature would have been to plot the first derivative of resistivity with respect to temperature and then determine its maximum (corresponding to the point of inflection within the transition). This not being the case, the temperature of the transition was approximated to be the temperature at the half way point in the drop between the two states. To ascertain at which points on the profile the change in state began and ended, separate lines of linear regression were fitted to the linear data in both the normal state and the superconducting state. These two lines of regression were extended closer and closer to the transition from either side until the adjusted R2 value of the lines of best fit was 0.999, which indicated an excellent linear fit. It was found upon inspection that the mid-point of the transition could be defined in two different ways: the mid-point in resistivity and the mid-point in temperature (the mid-point in resistivity corresponding to a slightly different temperature than the mid-point in temperature). This was due to a slight skew in the transition in the profile, and so in order to clearly define the superconducting transition temperature a clearer approximation than the one stated before had to be made. It was therefore decided that the temperature corresponding to the mid-point in resistivity should be averaged with the mid-point in temperature on the x-axis, with the error being the temperature either side of this average value at which the two previous mid values lay. This can be seen more clearly in figure 20.

Figure 20: Shows the method used to calculate the superconducting transition temperature using an expanded view of the first profile in figure 19. Lines of linear regression are shown in black either side of the area in which the transition occurs (in yellow). Both temperatures can be seen highlighted by dashed lines.

By the use of this method it was determined that the transition temperatures for the two profiles in figure 19 were 87.6(±0.9) K and 86.0(±0.4) K for the first and second profiles respectively. Although these do not agree with each other (within the confines set by the errors), an average was taken and found to be 86.8(±0.8) K. The purity indicator was also calculated for each profile and found to be 0.116 and 0.104 respectively. These two values differ by approximately 10%, which is reasonable considering that they are from the same sample.
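A sketch of this mid-point method is given below. The profile here is an idealised, synthetic transition (a smooth step centred near 87 K), and T_start and T_end play the role of the points at which the regression lines stopped fitting well.

    import numpy as np

    def transition_temperature(T, rho, T_start, T_end):
        # Select the transition region; T is ascending and rho rises
        # monotonically with T through the transition.
        sel = (T >= T_start) & (T <= T_end)
        Ts, rs = T[sel], rho[sel]
        # Temperature at the mid-point in resistivity (interpolated), and the
        # mid-point in temperature; average the two, as in figure 20.
        T_mid_rho = np.interp((rs[0] + rs[-1]) / 2, rs, Ts)
        T_mid_T = (Ts[0] + Ts[-1]) / 2
        return (T_mid_rho + T_mid_T) / 2, abs(T_mid_rho - T_mid_T) / 2

    T = np.linspace(80.0, 95.0, 301)
    rho = 0.05 / (1 + np.exp(-(T - 87.0) / 0.8))  # synthetic broadened transition
    Tc, dTc = transition_temperature(T, rho, 85.0, 90.0)
    print(f"Tc = {Tc:.1f} ± {dTc:.1f} K")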
The resistivity profiles of the samples from batch two can be seen below in figure 21. No profiles of sample 1 could be obtained, as it broke while being affixed to the probe due to its thinness. Each of the profiles shown in figure 21 displays linear behaviour in the normal state region, as predicted. As stated before, lines of linear regression were fitted to the data in the normal state after the transition and reduced chi-squared tests were carried out, all resulting in values between 6×10^-5 and 2×10^-5. Normally this might signify an overestimation of the errors used with the data in the trend. However, as these values were calculated without errors to begin with, it merely shows that the sheer number of data points undermines any reasonable statistical test measuring goodness of fit. Adjusted R2 values were found to be 0.999 or better with respect to a linear fit. All of the transition temperatures from the profiles in figure 21 and their indicators of sample purity were calculated in exactly the same way as before and can be seen in figure 22.

Sample   TC (K)   Error (K)   PI
2        88.1     0.3         0.079
2        88.1     0.4         0.067
2        89.1     0.6         0.090
3        88.3     0.1         0.070
3        87       1           0.109
4        87.9     0.4         0.073
4        88.0     0.2         0.085
4        88.1     0.4         0.075
4        85.97    0.04        0.043

Figure 22: Transition temperatures and purity indicators for each profile of the batch two samples.

It can be seen in figure 21 that, of the profiles from sample 2, two follow an almost identical path in the normal state while the third and final profile remains approximately 0.014 Ω mm greater than the previous two at all times. This difference is also reflected in figure 22, where the transition temperature of the final profile can be seen to be 1 K greater than the other two. This could be attributed to flaking contacts on the four point probe. Due to thermal shock over time, in the form of the cryostat heating and cooling, the silver paint which held the four point probe contacts in place on the sample would sometimes flake slightly, thus increasing the perceived resistance and hence the resistivity of the sample. This theory is further supported by what look like rogue data points on the normal state side of the transition, giving the graph a set of spikes away from a linear fit.

The first profile from sample 3 cut out early due to a technical fault. It can also be seen that the differences between the first and second profiles are similar to those in sample 2. This is also likely to have been caused by loose connections or flaking lending a greater resistance to the second profile, confirmed again by a spiking character towards the higher temperature end of the normal state trend.

The profiles of sample 4 group well, both in the linear, normal state region and, as shown in figure 22, in their calculated values of the transition temperature. There is, however, a large discrepancy between the resistivity of sample 4 and that of samples 2 and 3: almost double at all points in the normal region. One possible explanation could be the known difference in the values of resistivity along the a or b axis and the c axis of the unit cell of YBCO, leading to the idea that one sample may have had its unit cells aligned in a different orientation to the other two when current was passed through it. However, literature values of this ratio range from 30 to 150,[xvi] rendering this hypothesis highly unlikely.
A large source of error was that of the separations of the contacts of the four point probe, carried through to the calculation of the resistivity. This enormous error on the value of r (in equation 8) gives each value of the resistivity an error of 20% of its own value (these have not been included in figure 21 as, due to the large number of data points, all of the errors from each profile blend into one another, making it extremely difficult to determine which set of errors belongs to which data set). If this error were not included then the error on the resistivity, just as that on the resistance, would be negligibly small. 20% seems far too large and could be a result of various factors. The first of these is the large spread and small number of measurements taken of the separations, which itself results from two factors. The first is that the separations were measured by eye using vernier callipers; had a travelling microscope been used, much more accurate measurements could have been taken. The second is that, as the contact wires themselves were not straight, a linear distance between them was difficult to establish (photo in figure 23). It could also be that the separation of the wires was measured from the wrong points on the sample. This is explained using figure 23.

Figure 23: A photo of a sample and four point probe with a simplified cross section of it to demonstrate current movement through the sample both in the superconducting state (green) and the normal state (red).

The separations of the contacts of the four point probe were measured from the centres of the wires. It can be seen in figure 23 that this is a reasonable approximation while the sample is in the superconducting state; the current will work to minimise the distance it travels through the conductive silver paint due to the sample's higher conductivity. However, in the normal state this is reversed and the current will work to minimise the distance travelled through the sample. It may therefore have been a better idea to measure the contact separations from the edge of the silver paint. This effect could be negated if thin film materials were used; however, these have different physical properties to bulk materials and so would change the purpose of the investigation altogether.

As can be seen in figure 22, all of the samples from batch two had similar levels of purity (at least all of the same order of magnitude). The transition temperatures for batch two were averaged and found to be 87.8(±0.4) K. This agrees with the transition temperature calculated for batch one, which is logical as the XRD data indicates that they are extremely similar compounds. Although this might be the case, it would make no sense to average the transition temperatures of both batches, even by weight, due to the current inability to ensure that both batches contain the same oxygen content (the overriding factor determining transition temperature).
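Averaging the figure 22 values reproduces the batch two transition temperature quoted above. The sketch below uses a simple mean with the standard error on the mean, which is one reasonable convention; the quoted ±0.4 K may have been derived slightly differently.

    import numpy as np

    # Transition temperatures of the batch two profiles, from figure 22.
    Tc = np.array([88.1, 88.1, 89.1, 88.3, 87.0, 87.9, 88.0, 88.1, 85.97])  # K
    mean = Tc.mean()
    sem = Tc.std(ddof=1) / np.sqrt(len(Tc))
    print(f"mean Tc = {mean:.1f} K, error on the mean = {sem:.1f} K")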
In an attempt to determine the oxygen content of the two batches of YBCO, a phase diagram such as the one shown in figure 24 was used. In this way the oxygen contents of batches one and two were determined to be 6.82(±0.01) and 6.83(±0.01) respectively. This again supports the XRD data plotted in figure 14 in confirming the similarity in compound and structure. When trying to compare the two transition temperatures calculated (86.8(±0.8) K and 87.8(±0.4) K) to literature data, two major problems are encountered. The first is that this report has determined the oxygen content of the samples using their calculated transition temperature. It therefore seems somewhat counterintuitive to find a reference transition temperature based upon an oxygen content which has itself been determined from the transition temperature. It is, however, widely established that the upper bound of the superconducting transition temperature of YBCO is 95 K, which the data determined here fits.

IV.I Specific Heat Capacity Methods

In order to obtain a value of the superconducting transition temperature independent of the electrical properties of the samples, two similar methods of measuring how the specific heat capacity varied as a function of temperature were employed. For both methods a sample of YBCO was suspended at the base of a cryostat probe using thermally and electrically insulating dental floss. Two opposing sides of the sample were then coated in a thermally conductive heat sink paste. On one of these sides a strain gauge was placed and on the other a platinum thermometer. These were secured using a layer of thermally insulating varnish. Both gauge and thermometer were attached to an ammeter, a voltmeter (Keithley 2000 DMMs) and a current source using four point resistance set ups, as can be seen below in figure 25.

Figure 25: Cross sectional view of a sample prepared for specific heat capacity measurements. The sample is shown in dark grey, the heat sink paste in light grey, varnish in brown, the platinum thermometer in blue and the strain gauge in red and yellow.

The first method involved placing this probe in an airtight sample space within a vat of liquid nitrogen. The space was filled with helium until the sample had reached the same temperature as the liquid nitrogen and then vacuumed to a pressure of 2.20×10^-6 Torr. A constant current was then applied to the strain gauge, causing it to heat. All of the variables were recorded using the LabView program in appendix 2. Two assumptions were made during this experiment. The first was that all of the electrical power going into the strain gauge was transferred into thermal energy, and the second was that all of this thermal energy was then conducted through the sample and detected by the platinum thermometer (equation 9, where Q is the energy supplied, tf is the final time, m is the mass of the sample, ΔT is the change in temperature and the rest of the variables are as before in this report).

Q = ∫0^tf IV dt = mCVΔT (9)[xvii]

The second method took the sample set up shown in figure 25 and placed it in a cryostat in which the temperature could be controlled. Random bursts of power were applied to the strain gauge (as opposed to a continuous flow) at a range of different temperatures, and the corresponding rise in temperature was recorded with the same program as was used for the first method. The LabView program used to record data for this experiment can also be found in appendix 2.
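In practice equation 9 amounts to integrating the logged electrical power over time and dividing by mΔT. A sketch using the trapezium rule follows; all numbers are hypothetical stand-ins for the logged data.

    import numpy as np

    t = np.linspace(0.0, 120.0, 41)    # s, measurement times (hypothetical)
    I = np.full_like(t, 50e-3)         # A, constant drive current (hypothetical)
    V = np.full_like(t, 2.0)           # V, strain gauge voltage (hypothetical)

    m = 2.1    # g, sample mass (hypothetical)
    dT = 2.9   # K, temperature rise recorded by the Pt100 (hypothetical)

    Q = np.trapz(I * V, t)             # J, total energy supplied
    Cv = Q / (m * dT)                  # J g^-1 K^-1, from equation 9
    print(f"Q = {Q:.1f} J, Cv = {Cv:.2f} J/(g K)")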
IV.II Specific Heat Capacity Results, Analysis and Interpretation

The results of the first of the two experiments to measure the heat capacity of the sample from the first batch can be seen below in figure 26. An initial static temperature of approximately 74 K was recorded, which at first appeared to be an equipment failure, as liquid nitrogen boils at approximately 77 K (at atmospheric pressure). This could have been explained if the liquid nitrogen had been kept under pressure within the vat; however, considering that the sample space could be inserted into the top of the vat freely, this was not the case and the liquid nitrogen must have been at atmospheric pressure. The platinum thermometer was even retested within a standard cryostat and agreed with equation 7.

Due to only the cumulative energy and temperature being recorded, a small but not unreasonable assumption had to be made when calculating the specific heat capacity: it was assumed that at any given point the total energy supplied up until that point would result in the temperature measured, whether the sample had been cooled between measurements or heated continuously.

The initial drop in heat capacity can be attributed to the time taken for the initial thermal energy supplied by the strain gauge to travel through the bulk of the sample to the platinum thermometer. This trend can be seen to continue until approximately 76 K. At a temperature of just over 86 K the specific heat capacity of the sample seemed to gain an almost exponential character. It has been suggested that this was due to ineffective thermal insulation of the sample in the sample space, and that instead of the thermal energy being transferred through the sample it was transferred to the liquid nitrogen through convection/radiation. Convection to the sample space wall seems unlikely considering the very low pressure within the sample space, and radiation would not be an efficient enough transfer process to account for this large and constant drain of thermal energy from the experimental system.

Enlarging the area of the graph around the previously found transition temperature, a very small discontinuity can clearly be seen just before the large amount of noise in the upper, normal state, heating regions. This can be seen in figure 27. When compared to figure 5 in section I it can be seen that this discontinuity, although not as sharp, matches the shape expected in theory: a sharp jump followed by a change in the character of the heat capacity. Even with errors set by the trend described by equation 7 (±0.04 K), this discontinuity still distorts the general trend to an extent that could not be explained by a singular anomaly. The peak of this sharp discontinuity in heat capacity was found to occur at a temperature of 86.1(±0.1) K, which agrees with the transition temperature for the first batch found in the resistivity experiment of 86.8(±0.8) K, albeit at the extremes of the errors.

Due to time constraints only two further experiments were run in an attempt to replicate these initial results, but both failed to do so. The first was repeated with a current within error of that used in the first experiment and yielded the results seen in figure 28. Although it can be seen that this experiment, like the first, gives rise to an almost linear rise in temperature at around 86 K, it peaks just before, whereas the first experiment peaked just after 86 K. A discontinuity can also be seen within this curve, similar to the first experiment. The main differences start to occur when the discontinuity is seen in more detail. Instead of a continuing change in specific heat capacity such as that seen in figure 27, this discontinuity has a shape that is more reminiscent of a few rogue data points, possibly due to a loose wire in the platinum thermometer electronics. This can be seen more clearly in figure 29 below.
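One simple way to locate a step of this kind is to take the finite-difference derivative of the heat capacity with respect to temperature and find its largest value; a lone rogue point would instead show up as two adjacent jumps of opposite sign. The sketch below builds a synthetic smooth trend with a small step added at 86.1 K, purely for illustration:

    import numpy as np

    T = np.linspace(80.0, 92.0, 601)
    C = 2.0 + 0.01 * (T - 80.0)        # smooth background trend (synthetic)
    C = C + 0.05 * (T >= 86.1)         # small step at the transition (synthetic)

    dC = np.diff(C) / np.diff(T)       # finite-difference derivative
    i = np.argmax(np.abs(dC))
    print(f"largest jump at T = {T[i]:.2f} K")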
The second of these two repeat experiments can be seen below in figure 30. Conducted with a slightly higher current, this trend continues past 86 K and starts to rise just after 88 K. There is no discontinuity in this trend, for several possible reasons. The first is that its specific heat capacity never reaches 17.30(±0.03) J g-1 K-1 (the value of the specific heat capacity at the discontinuity found in figure 27). This is primarily a consequence of the experiment cutting out early due to a technical fault. The second is that the discontinuity found in figure 27 is possibly an anomaly and should not normally occur.

The minimum points on each of figures 26, 28 and 30 were determined and found to be 2.31(±0.03) J g-1 K-1, 2.13(±0.03) J g-1 K-1 and 2.82(±0.03) J g-1 K-1 respectively. Although not agreeing within error, these values do lie in the same order of magnitude and so could differ due to a slight change in environmental conditions (e.g. the concentration of helium/air inside the sample space). The only reference data found on this topic combined the normal and superconducting heat capacities using equation 4. This assumes that the phonon contribution to the normal state heat capacity is equal to zero, which is clearly not the case in the high temperature superconductors, and so no comparison can be made here.

Only one set of data was able to be taken for the second heat capacity experiment and the results can be seen below in figure 31. Although the initial heating spike seen in each of the previous heat capacity experiments' results is present, the trend lacks any other similar characteristics. It moves straight past the region in which the transition temperature was thought to lie with no visible effect on the results. This could be for several reasons. The first could be that not enough data points were collected and any discontinuity was simply missed. The second, and much more likely, reason could be that in an attempt to hold the cryostat at a constant higher temperature all of the liquid nitrogen boiled off, and thermal energy from the sample's surroundings caused it to heat instead of the energy supplied from the strain gauge. This was a problem with the design of the experiment. It is for this reason that the minimum specific heat capacity achieved in the second heat capacity experiment could not be measured: external heating simply made the results obtained from this method far too unreliable to use in any data analysis.

V Conclusions

The calibration of the furnace gave results instrumental in the successful fabrication of YBCO, and the data obtained from the calibration of the Pt100 allowed the temperature of the sample space to be measured to a much greater accuracy during experiments. The relationship between the Pt100 platinum thermometer's resistance and its temperature was also shown to differ very little from reference data.

It can be concluded that YBCO was successfully fabricated within the laboratory. This was confirmed by X-ray diffraction data. If the fabrication were to be redone it would be useful to identify a way in which the actual amount of oxygen absorbed into the compound could be measured. In this way the actual compound fabricated could be identified and compared to reference data, as opposed to having to use experimentally determined transition temperatures to work backwards using phase diagrams. It would also be beneficial to create several more samples of varying oxygen content in order to gather a wider range of profiles and transition temperatures. It would also be extremely beneficial to fabricate the compound in a much cleaner environment, e.g. a clean room instead of a fume cupboard.
This would not only give a much sharper transition due to a greater level of purity but would also remove much of the background noise in the XRD data, making it much easier to compare to reference data.

It can also be concluded that, as a result of the successful resistivity profiling of the samples, the transition temperatures of the two batches were found to be 86.8(±0.8) K and 87.8(±0.4) K. These results were then used to determine the oxygen contents of the fabricated samples, found to be 6.82(±0.01) and 6.83(±0.01). The profiles also all showed excellent linearity in the normal state regions, as predicted by theory. If this experiment were to be repeated, a much more accurate way of measuring the four point probe contact separations would be determined in order to reduce the overly large error propagated through into the values of the resistivity.

The experiments concerning the specific heat capacity of the compound gave mixed results in terms of usable data. The primary data gathered in the first experiment seemed to confirm the transition temperature found in the resistivity experiment, with a discontinuity at 86.1(±0.1) K, but these results could not then be reproduced, calling into question the accuracy of the experiment. However, measurements of the specific heat capacities at the minimum points agreed within an order of magnitude, although not within the small errors set. The second experiment failed completely to produce any sort of usable data due to poor experimental design. It was also found that the specific heat capacity of the normal state should have been recorded in order to produce any results comparable to reference literature.

With all of this taken into consideration, it can still be seen that the main difficulty with these experiments was that the oxygen content of the samples was unknown. This made it almost impossible to compare the transition temperatures found to any sort of reference data.

Acknowledgements

Prof. Damian Hampshire
Mr. Mark Raine
Mr. Gary Oswald
Miss L. Falk
Mrs. S. Jowitt

19 January 2010, Josephine Butler College

References

[i] Theory of Superconductivity, J.R. Schrieffer, Perseus Books (124), page 1
[ii] Introduction to Solid State Physics 8th Edition, C. Kittel, John Wiley & Sons, Inc (2005), page 259
[iii] https://nobelprize.org/nobel_prizes/physics/laureates/1913
[iv] Possible high TC superconductivity in the Ba-La-Cu-O system, J.G. Bednorz and K.A. Müller (1986)
[v] https://nobelprize.org/nobel_prizes/physics/laureates/1987
[vi] Superconductivity, C.P. Poole, Jr., H.A. Farach and R.J. Creswick, Academic Press (1995), page 40
[vii] Superconductivity, E.A. Lynton, Methuen & Co Ltd (1962), page 75
[viii] Superconductivity: Volume I, R.D. Parks, Marcel Dekker Inc (1969), page 76
[ix] Superconductivity: Volume I, R.D. Parks, Marcel Dekker Inc (1969), page 6
[x] Superconductivity, C.P. Poole, Jr., H.A. Farach and R.J. Creswick, Academic Press (1995), page 96
[xi] Superconductivity, C.P. Poole, Jr., H.A. Farach and R.J. Creswick, Academic Press (1995), page 36
[xii] High-Temperature Superconductivity in Cuprates, A. Mourachkine, Kluwer Academic Publishers (2002), page 40
[xiii] R. Swarup, A.K. Gupta and M.C. Bansal (1995). Effect of sample density on magnetic penetration depth in YBaCuO ceramic superconductors.
Journal of Superconductivity 8 (3): 361-364
[xiv] https://docs-europe.electrocomponents.com/webdocs/0c41/0900766b80c41b6b.pdf
[xv] Semiconductor Material and Device Characterization, Schroder D.K., John Wiley & Sons Inc (1990), page 4
[xvi] Superconductivity, C.P. Poole, Jr., H.A. Farach and R.J. Creswick, Academic Press (1995), pages 28-29
[xvii] Physics for Scientists and Engineers, Sixth Edition, Tipler P.A. and G. Mosca, W.H. Freeman and Company (2008), pages 687 and 875