Electronic Data Interchange (EDI) is the computer-to-computer exchange of business documents in a standard electronic format between business partners. The documents typically travel between counterparts through an intermediary, a centralized document-transfer service, and the approach defines how the counterparts' information systems "communicate" by means of electronic messages without human input.
By moving from a paper-based exchange of business documents to an electronic one, businesses enjoy major benefits such as reduced cost, increased processing speed, fewer errors and improved relationships with business partners.
Each term in the definition is significant:
Computer-to-computer – EDI replaces postal mail, fax and email. While email is also an electronic approach, the documents exchanged via email must still be handled by people rather than computers. Having people involved slows down the processing of the documents and also introduces errors. Instead, EDI documents can flow straight through to the appropriate application on the receiver’s computer (e.g., the Order Management System) and processing can begin immediately.
Business documents – These are any of the documents that are typically exchanged between businesses. The most common documents exchanged via EDI are purchase orders, invoices and advance ship notices. But there are many, many others such as bill of lading, customs documents, inventory documents, shipping status documents and payment documents.
Standard format – Because EDI documents must be processed by computers rather than humans, a standard format must be used so that the computer will be able to read and understand the documents. A standard format describes what each piece of information is and in what format (e.g., integer, decimal, mmddyy). Without a standard format, each company would send documents in its own company-specific format and, much as an English-speaking person probably doesn’t understand Japanese, the receiver’s computer system would not understand the sender’s company-specific format.
- There are several EDI standards in use today, including ANSI, EDIFACT, TRADACOMS and ebXML. And, for each standard there are many different versions, e.g., ANSI 5010 or EDIFACT version D12, Release A. When two businesses decide to exchange EDI documents, they must agree on the specific EDI standard and version.
- Businesses typically use an EDI translator – either as in-house software or via an EDI service provider – to translate the EDI format so the data can be used by their internal applications and thus enable straight through processing of documents.
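The points above note that a standard format lets a translator turn an EDI document into data an internal application can use. As a toy sketch of the idea (not a real translator; the segment layout and field names here are simplified and invented for illustration), an ANSI X12-style segment can be split on its element separator and each position given a meaning:

```python
# Toy illustration of why a standard format matters: every party agrees
# which field sits in which position, so a program can read the segment.
# The segment content and field labels below are invented for this example.
RAW = "BEG*00*SA*PO4544*20230915"  # segment ID, then data elements

def parse_beg(segment: str) -> dict:
    """Split a BEG-style (beginning of purchase order) segment on the
    '*' element separator and label each position."""
    parts = segment.split("*")
    return {
        "segment_id": parts[0],    # BEG
        "purpose_code": parts[1],  # 00 = original
        "po_type": parts[2],       # SA = stand-alone order
        "po_number": parts[3],
        "po_date": parts[4],       # CCYYMMDD
    }

print(parse_beg(RAW))
```

A real translator would validate the segment against the agreed standard and version before handing the data to the order-management system.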
Business partners – The exchange of EDI documents is typically between two different companies, referred to as business partners or trading partners. For example, Company A may buy goods from Company B. Company A sends orders to Company B. Company A and Company B are business partners.
The main objective of EDI is to replace the exchange of paper documents with an electronic document flow between information systems. In addition, EDI aims:
- To replace paper-based information exchange
- To standardize and unify data
- To integrate data processing into information systems
- To reduce the share of manual labor
- To increase the speed and accuracy of data collection
- To guarantee efficient delivery of data
- To provide the necessary control, management and legal validity of information flows
- To guarantee information security
Main functions of EDI:
- Searching for goods and suppliers, viewing information, comparing prices, maintaining ratings
- Creating, saving and sending EDI documents
- Signing documents with an electronic digital signature (EDS)
- Integration with registration systems (for example, with 1C or SAP)
Advantages of implementation:
- Increased turnover of goods due to reduced stockout time
- Reduced spending on consumables such as paper, printer cartridges and office-equipment servicing
- Reduced telephone and fax expenses
- Fewer staff working hours spent on manual labor
- Fewer manual data-entry errors, since human involvement and duplicate data entry/transfer are reduced
E-commerce is, at its core, the use of the Internet to transact business: digitally enabled commercial transactions between and among organizations and individuals. When such transactions involve information systems under the control of the firm, the broader term e-business applies. E-commerce, a subset of e-business, is the purchasing, selling, and exchanging of goods and services over computer networks (such as the Internet) through which transactions or terms of sale are performed electronically. Contrary to popular belief, e-commerce is not just on the Web. In fact, e-commerce was alive and well in business-to-business transactions before the Web, back in the 1970s, via EDI (Electronic Data Interchange) over VANs (Value-Added Networks). E-commerce can be broken into three main categories: B2B, B2C, and C2C.
B2B (Business-to-Business): Companies doing business with each other, such as manufacturers selling to distributors and wholesalers selling to retailers. Pricing is based on quantity of order and is often negotiable. This is the largest form of e-commerce, involving billions of dollars in business. In this form, the buyers and sellers are both business entities and no individual consumer is involved. E.g. ChemConnect, Grainger.
B2C (Business-to-Consumer): This is one of the most common e-commerce segments today. As the name suggests, this model involves businesses and consumers: online businesses sell to individual consumers. The basic concept is that online retailers and marketers can sell their products to the online consumer using detailed data made available via various online marketing tools. E.g. BarnesandNoble.com
C2C (Consumer-to-Consumer): This system facilitates the online transaction of goods or services between two people. There are many sites offering free classifieds, auctions, and forums where individuals can buy and sell through online payment systems like PayPal, which let people send and receive money online with ease. eBay’s auction service, where person-to-person transactions have taken place every day since 1995, is a great example.
Other categories of e-commerce:
Companies using internal networks to offer their employees products and services online–not necessarily online on the Web–are engaging in B2E (Business-to-Employee) ecommerce.
G2G (Government-to-Government), G2E (Government-to-Employee), G2B (Government-to-Business), B2G (Business-to-Government), G2C (Government-to-Citizen), C2G (Citizen-to-Government) are other forms of ecommerce that involve transactions with the government–from procurement to filing taxes to business registrations to renewing licenses.
1. Data duplication: When files are duplicated and held in a number of locations, situations can arise that cause the data to become inconsistent.
- Corrections or modifications made in one location may not be updated in another. For example, customer address files held by the Accounts Department may be updated while those held by Sales are not updated. For the customer this may mean that the account arrives but the goods do not.
- Modifications made to data files may also lead to less obvious discrepancies. For example, a suburb name may be spelt differently in two locations, e.g. Allambie, Allamby. A report calculating sales to customers by suburb may then count the same customers twice. This may not be obvious if the report is a summary-style report.
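The suburb example above can be sketched in a few lines: the same customer appears under two spellings, so a per-suburb summary silently produces two rows for one customer. The data here is invented for illustration.

```python
# One customer, two spellings of the same suburb: a summary report
# groups them separately, so the customer is effectively counted twice.
from collections import defaultdict

sales = [
    {"customer": "I Smith", "suburb": "Allambie", "amount": 100},
    {"customer": "I Smith", "suburb": "Allamby",  "amount": 250},
]

totals = defaultdict(float)
for row in sales:
    totals[row["suburb"]] += row["amount"]

print(dict(totals))  # {'Allambie': 100.0, 'Allamby': 250.0}
```

In a summary report only the totals are visible, so nothing flags that "Allambie" and "Allamby" are the same place.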
2. Poor data control: File systems have no centralized control of the data descriptions. Tables and field names may be used in different locations to mean different things. For example, the Sales department’s files may list a customer as having a single Name field made up of the customer’s initial and last name, e.g. I Smith. The Accounts department may keep the customer’s name in three separate fields: First name, Initial, Last name. This may make it difficult to compare the data in the two files, or at least require additional time in programming the comparison.
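The extra programming effort described above is essentially a normalization step: before the two departments' records can be compared, each format must be mapped to a common key. This is a hedged sketch with invented field names, not a real schema:

```python
# Sales stores one Name field ("I Smith"); Accounts stores separate
# First name / Initial / Last name fields. Comparing the two files
# requires normalizing both to a common form first.
def normalize_sales(name: str) -> tuple:
    """Split Sales' single 'I Smith'-style field into (initial, last)."""
    initial, last = name.split(" ", 1)
    return (initial.upper(), last.lower())

def normalize_accounts(record: dict) -> tuple:
    """Derive the same (initial, last) key from Accounts' three fields."""
    return (record["first"][0].upper(), record["last"].lower())

sales_name = "I Smith"
accounts_record = {"first": "Ian", "initial": "I", "last": "Smith"}

# Only after normalization do the two records match.
assert normalize_sales(sales_name) == normalize_accounts(accounts_record)
```

With a centralized data description (as a DBMS provides), this per-comparison translation code would be unnecessary.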
3. Inadequate data manipulation capabilities: Data in traditional file systems is not easily related, particularly if the files have been developed for separate purposes. If the organization requires information to be generated that accesses data from several unrelated files the task may prove difficult or require re-entry of data. For example, in a library the catalogue of books may be held in one file. Books on order for the library may be held in another file. When books are received the catalogue will need to be manually updated if the two files are not related.
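The library example above boils down to matching two unrelated flat files on a shared key by hand. A minimal sketch with invented data (using ISBN as the matching key) shows the manual update that a related database would handle automatically:

```python
# Two unrelated "files": the catalogue and the books on order.
# When ordered books arrive, the catalogue must be updated by
# manually matching records on a shared key (here, the ISBN).
catalogue = {"0-123": {"title": "Databases 101", "copies": 2}}
orders = [{"isbn": "0-123", "qty": 3}, {"isbn": "0-999", "qty": 1}]

for order in orders:
    if order["isbn"] in catalogue:
        catalogue[order["isbn"]]["copies"] += order["qty"]
    else:
        # A title not yet catalogued must be added by hand.
        catalogue[order["isbn"]] = {"title": "(new title)",
                                    "copies": order["qty"]}

print(catalogue["0-123"]["copies"])  # 5
```

Every new cross-file report requires writing this kind of matching logic again, which is exactly the manipulation limitation the paragraph describes.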
4. Program data dependence: File data is stored within each of the applications that use that data, e.g., a sales transaction program may have several files relevant to it: Customer, Stock_in_hand, Sale_Info. These files are integrated into the program.
5. Limited data sharing: This dependence of the data on the program means that the files are not necessarily suitable for a new program that is being developed. The new program may need its data in another form or require additional data that is not held.
6. Lengthy development times: Each new application requires development of the program along with development of the relevant files for that application. Although the data may be held elsewhere in the organization, it will need to be imported or re-entered into the new files, which takes time. As organizations grow and change, they need to change their internal applications quickly to meet new demands, so lengthy development times are a disadvantage.
7. Program maintenance: File maintenance can be time consuming in traditional file processing systems. Changes to files mean changes to application programs.
Big data is a term that describes the large volume of data – both structured and unstructured – that inundates a business on a day-to-day basis. But it’s not the amount of data that’s important. It’s what organizations do with the data that matters. Big data can be analyzed for insights that lead to better decisions and strategic business moves.
While the term “big data” is relatively new, the act of gathering and storing large amounts of information for eventual analysis is ages old. The concept gained momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream definition of big data as the three Vs:
Volume: Organizations collect data from a variety of sources, including business transactions, social media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a problem – but new technologies (such as Hadoop) have eased the burden.
Velocity: Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time.
Variety: Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.
The importance of big data doesn’t revolve around how much data you have, but what you do with it. You can take data from any source and analyze it to find answers that enable: 1) cost reductions, 2) time reductions, 3) new product development and optimized offerings, and 4) smart decision making.
How Big Data Analytics work: Companies start by identifying significant business opportunities that may be enhanced by superior data and then determine whether Big Data Analytics solutions are needed. If they are, the business will need to develop the hardware, software and talent required to capitalize on Big Data Analytics. That often requires the addition of data scientists who are skilled in asking the right questions, identifying cost-effective information sources, finding true patterns of causality and translating analytic insights into actionable business information.
To apply Big Data Analytics, companies should:
- Select a pilot (a business unit or functional group) with meaningful opportunities to capitalize on Big Data Analytics
- Establish a leadership group and team of data scientists with the skills and resources necessary to drive the effort successfully
- Identify specific decisions and actions that can be improved
- Determine the most appropriate hardware and software solutions for the targeted decisions
- Decide whether to purchase or rent the system
- Establish guiding principles such as data privacy and security policies
- Test, learn, share and refine
- Develop repeatable models and expand applications to additional business areas
Companies use Big Data Analytics to:
- Improve internal processes, such as risk management, Customer Relationship Management, supply chain logistics or Web content optimization
- Improve existing products and services
- Develop new product and service offerings
- Better target their offerings to their customers
- Transform the overall business model to capitalize on real-time information and feedback
One of the tools for managing Big Data is Hadoop, an open-source, Java-based programming framework that supports the processing and storage of extremely large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation.
Apache Hadoop: Open-source software that allows users to store and process large amounts of data across clusters of computers, systems and files, Apache™ Hadoop® provides the tools for extracting intelligence from data through analysis and visualization.
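Hadoop's processing model is MapReduce: a map phase emits key/value pairs and a reduce phase aggregates them per key, with Hadoop distributing both phases across the cluster. The classic first example is word count; here is a single-machine Python sketch of the idea only (real Hadoop jobs are written against its Java API or run via Hadoop Streaming):

```python
# MapReduce-style word count in plain Python. On a Hadoop cluster the
# map and reduce phases would run in parallel across many machines;
# this sketch only illustrates the two phases.
from collections import Counter
from itertools import chain

documents = ["big data big ideas", "data beats opinion"]

def map_phase(doc: str):
    """Emit (word, 1) pairs, like a Hadoop mapper."""
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    """Sum the counts per key, like a Hadoop reducer."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

pairs = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(pairs))
```

Because each mapper sees only its own slice of the data and each reducer only one key's pairs, the same program scales to data sets far larger than one machine's memory.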
Extensible Markup Language (XML) is used to describe data. The XML standard is a flexible way to create information formats and electronically share structured data via the public Internet, as well as via corporate networks.
XML code, a formal recommendation from the World Wide Web Consortium (W3C), is similar to Hypertext Markup Language (HTML). Both XML and HTML contain markup symbols to describe page or file contents. HTML code describes Web page content (mainly text and graphic images) only in terms of how it is to be displayed and interacted with.
XML data is known as self-describing or self-defining, meaning that the structure of the data is embedded with the data itself; when the data arrives, there is no need to pre-build a structure to store it, because the structure is understood dynamically from the XML. The XML format can be used by any individual, group of individuals or company that wants to share information in a consistent way. XML is actually a simpler and easier-to-use subset of the Standard Generalized Markup Language (SGML), the standard for creating document structures.
The basic building block of an XML document is an element, defined by tags. An element has a beginning and an ending tag. All elements in an XML document are contained in an outermost element known as the root element. XML can also support nested elements, or elements within elements. This ability allows XML to support hierarchical structures. Element names describe the content of the element, and the structure describes the relationship between the elements.
An XML document is considered to be “well formed” (that is, able to be read and understood by an XML parser) if its format complies with the XML specification, if it is properly marked up, and if elements are properly nested. XML also supports the ability to define attributes for elements and describe characteristics of the elements in the beginning tag of an element.
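The concepts above (a root element, nested elements, attributes, and well-formedness) can be shown with Python's standard-library XML parser. The element names here are invented for illustration, loosely following the computer-product example used later in this section:

```python
# A small well-formed XML document: one root element ("catalog"),
# nested elements, and attributes. Parsing fails with an error if
# the document is not well formed (e.g., improperly nested tags).
import xml.etree.ElementTree as ET

doc = """<catalog>
  <computer id="pc-1">
    <processorSpeed unit="GHz">3.2</processorSpeed>
    <memorySize unit="GB">16</memorySize>
  </computer>
</catalog>"""

root = ET.fromstring(doc)
computer = root.find("computer")
print(computer.get("id"))                    # attribute on the element
print(computer.find("processorSpeed").text)  # text of a nested element
```

Note how the element names describe the content and the nesting describes the relationships, which is exactly what makes the data self-describing.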
Applications for XML are endless. For example, computer makers might agree upon a standard or common way to describe the information about a computer product (processor speed, memory size, and so forth) and then describe the product information format with XML code. Such a standard way of describing data would enable a user to send an intelligent agent (a program) to each computer maker’s Web site, gather data, and then make a valid comparison.
XML’s benefits sometimes appeared revolutionary in scope shortly after it was introduced. However, as a concept, it fell short of being revolutionary, and it fell short of being a panacea. The over-application of XML in so many areas of technology diminished its real value and resulted in a great deal of unnecessary confusion. Perhaps most damaging is the predictable behavior of many vendors that look to recast XML using their own sets of proprietary extensions. Although some want to add value to XML, others seek only to lock users in to their products.
XML’s power resides in its simplicity. It can take large chunks of information and consolidate them into an XML document: meaningful pieces that provide structure and organization to the information.