The Next Step in the Web’s Development for All

SEMANTIC WEB

 

A great deal of data online has been published in a form that makes it easy for us human consumers to absorb. Just consider weather reports, flight timetables, or company news. The Semantic Web aims to make this information understandable by computers as well, so that they can act upon it.

Acting on the information cannot rest on mere assumption; it must rely on an unambiguous description of content and of its relationship to other pieces of information.

RESOURCE DESCRIPTION FRAMEWORK (RDF)

RDF is a data model which the World Wide Web Consortium (W3C) defined in 1999. The data model is simple yet powerful enough to express properties of Web-based objects, or indeed of anything that can have a URI address. RDF uses triples, each of which identifies a resource, a property related to the resource, and the value of this property. The value can in turn be another resource, which allows RDF descriptions to encode both simple and intricate relationships, depending on the domain.
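As a concrete illustration, here is a minimal sketch of building and serializing triples with the Python rdflib library; the flight-timetable namespace and resource names are invented for the example:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# A hypothetical namespace for the example resources.
EX = Namespace("http://example.org/flights/")

g = Graph()
g.bind("ex", EX)

# Each triple names a resource, a property, and a value; here the value
# of ex:departsFrom is itself another resource (ex:Helsinki).
g.add((EX.BA123, EX.departsFrom, EX.Helsinki))
g.add((EX.BA123, EX.departureTime, Literal("2014-07-08T09:15:00")))
g.add((EX.Helsinki, RDF.type, EX.Airport))

print(g.serialize(format="turtle"))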

RDF alone would suffice if everyone used the same URIs for the same things and the same properties. Since this is not the case, reconciliation of RDF data models from multiple sources can be achieved with the help of ontologies and, more generally, rules. An ontology is a formal description of the concepts and properties used in semantic metadata descriptions, but it is not only that.

Ontologies provide us with primitives that help us discover implicit relationships.

WEB ONTOLOGY LANGUAGE (OWL)

OWL provides well-defined primitives for creating more RDF from your RDF. In other words, you can derive more value from your existing information assets with the help of an OWL-compliant inference engine.
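A minimal sketch of what “more RDF from your RDF” can mean, assuming Python’s rdflib together with the owlrl package (an open-source implementation of the OWL 2 RL profile); the property and instance URIs are invented:

from rdflib import Graph, Namespace
from rdflib.namespace import OWL
import owlrl

EX = Namespace("http://example.org/")

g = Graph()
# Assert that two properties are inverses of each other...
g.add((EX.employs, OWL.inverseOf, EX.worksFor))
# ...and a single fact stated with only one of them.
g.add((EX.alice, EX.worksFor, EX.acme))

# Compute the deductive closure: inference adds the entailed triples.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The inferred triple (acme employs alice) is now part of the graph.
print((EX.acme, EX.employs, EX.alice) in g)  # True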

To help realize the Semantic Web vision and to enable search engines to provide more precise results, the W3C has also specified the RDFa technology for embedding RDF descriptions inside HTML documents. This allows companies to become more reachable through search engines, at limited cost, once their Web Content Management software can embed rich metadata inside today’s presentation-rich but semantically poor web pages.
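A sketch of what such embedded descriptions can look like: the fragment below, generated from Python for concreteness, annotates ordinary HTML with RDFa attributes using the schema.org vocabulary; the organization data and URL are invented:

# Hypothetical organization data; in practice this would come from a CMS.
org = {
    "uri": "http://example.org/company",
    "name": "Example Widgets Ltd",
    "phone": "+358-9-1234-567",
}

# vocab, typeof, resource, and property are standard RDFa 1.1 attributes.
rdfa_fragment = """
<div vocab="http://schema.org/" typeof="Organization" resource="{uri}">
  <span property="name">{name}</span>
  <span property="telephone">{phone}</span>
</div>
""".format(**org)

print(rdfa_fragment)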

SPARQL QUERY LANGUAGE

SPARQL is short for SPARQL Protocol and RDF Query Language. Similar to the role SQL has played in the development of relational algebra and its implementation in relational databases, SPARQL provides a common access interface for repositories that are either native RDF databases or expose a SPARQL endpoint on top of a non-RDF database.

The W3C has defined SPARQL in three parts:

1) the query language with its semantics,
2) an encoding for the results of queries, and
3) a binding to the HTTP protocol for carrying these payloads between computers on the Web.
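For instance, here is a minimal sketch of running a SPARQL query over a small in-memory graph with Python’s rdflib; the data is invented for the example:

from rdflib import Graph

g = Graph()
# A few triples written in the Turtle syntax.
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" ;
                           foaf:knows <http://example.org/bob> .
<http://example.org/bob>   foaf:name "Bob" .
""", format="turtle")

# Ask for the names of everyone Alice knows.
for row in g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE {
        <http://example.org/alice> foaf:knows ?friend .
        ?friend foaf:name ?name .
    }
"""):
    print(row.name)  # -> Bob

Against a remote repository, the same query text would be sent to the repository’s SPARQL endpoint over HTTP instead.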

 

SEMANTIC WEB


Semantic Web

From Wikipedia, the free encyclopedia
The Semantic Web is a collaborative movement led by international standards body the World Wide Web Consortium (W3C).[1] The standard promotes common data formats on the World Wide Web. By encouraging the inclusion of semantic content in web pages, the Semantic Web aims at converting the current web, dominated by unstructured and semi-structured documents, into a “web of data”. The Semantic Web stack builds on the W3C’s Resource Description Framework (RDF).[2]

According to the W3C, “The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries”.[2] The term was coined by Tim Berners-Lee for a web of data that can be processed by machines.[3]

While its critics have questioned its feasibility, proponents argue that applications in industry, biology and human sciences research have already proven the validity of the original concept. Scholars have explored the social potential of the semantic web in the business and health sectors, and for social networking.[4]

The original 2001 Scientific American article by Berners-Lee, Hendler, and Lassila described an expected evolution of the existing Web to a Semantic Web,[5] but this has yet to happen. In 2006, Berners-Lee and colleagues stated that: “This simple idea…remains largely unrealized”.[6]

 

 

History

The concept of the Semantic Network Model was formed in the early 1960s by the cognitive scientist Allan M. Collins, linguist M. Ross Quillian and psychologist Elizabeth F. Loftus in various publications,[7][8][9][10][11] as a form to represent semantically structured knowledge. The Semantic Web extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other, enabling automated agents to access the Web more intelligently and perform tasks on behalf of users. The term “Semantic Web” was coined by Tim Berners-Lee,[3] the inventor of the World Wide Web and director of the World Wide Web Consortium (“W3C”), which oversees the development of proposed Semantic Web standards. He defines the Semantic Web as “a web of data that can be processed directly and indirectly by machines”.

Many of the technologies proposed by the W3C already existed before they were positioned under the W3C umbrella. These are used in various contexts, particularly those dealing with information that encompasses a limited and defined domain, and where sharing data is a common necessity, such as scientific research or data exchange among businesses. In addition, other technologies with similar goals have emerged, such as microformats.

Purpose

The main purpose of the Semantic Web is driving the evolution of the current Web by enabling users to find, share, and combine information more easily. Humans are capable of using the Web to carry out tasks such as finding the Estonian translation for “twelve months”, reserving a library book, and searching for the lowest price for a DVD. However, machines cannot accomplish all of these tasks without human direction, because web pages are designed to be read by people, not machines. The semantic web is a vision of information that can be readily interpreted by machines, so machines can perform more of the tedious work involved in finding, combining, and acting upon information on the web.

The Semantic Web, as originally envisioned, is a system that enables machines to “understand” and respond to complex human requests based on their meaning. Such an “understanding” requires that the relevant information sources be semantically structured.

Tim Berners-Lee originally expressed the vision of the Semantic Web as follows:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.[12]

The Semantic Web is regarded as an integrator across different content, information applications and systems. It has applications in publishing, blogging, and many other areas.

Often the terms “semantics“, “metadata“, “ontologies“, and “Semantic Web” are used inconsistently. In particular, these terms are used as everyday terminology by researchers and practitioners, spanning a vast landscape of different fields, technologies, concepts and application areas. Furthermore, there is confusion with regard to the current status of the enabling technologies envisioned to realize the Semantic Web. Gerber, Barnard, and Van der Merwe chart the Semantic Web landscape and provide a brief summary of related terms and enabling technologies in a paper.[13] The architectural model proposed by Tim Berners-Lee is used as a basis to present a status model that reflects current and emerging technologies.[14]

Limitations of HTML

Many files on a typical computer can also be loosely divided into human readable documents and machine readable data. Documents like mail messages, reports, and brochures are read by humans. Data, like calendars, address books, playlists, and spreadsheets are presented using an application program which lets them be viewed, searched and combined.

Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags provide a method by which computers can categorise the content of web pages, for example:

<meta name="keywords" content="computing, computer studies, computer" />
<meta name="description" content="Cheap widgets for sale" />
<meta name="author" content="John Doe" />

With HTML and a tool to render it (perhaps web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as “this document’s title is ‘Widget Superstore’”, but there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text “X586172” is something that should be positioned near “Acme Gizmo” and “€199”, etc. There is no way to say “this is a catalog” or even to establish that “Acme Gizmo” is a kind of title or that “€199” is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page.
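For contrast, here is a minimal sketch of how RDF (introduced in the sections that follow) can state those facts unambiguously, using Python’s rdflib; the item URI and the use of schema.org property names are illustrative assumptions:

from rdflib import Graph

g = Graph()
# The catalog item from the text, expressed as explicit triples.
g.parse(data="""
@prefix schema: <http://schema.org/> .
<http://example.org/items/X586172>
    a schema:Product ;
    schema:name "Acme Gizmo" ;
    schema:offers [ a schema:Offer ;
                    schema:price "199" ;
                    schema:priceCurrency "EUR" ] .
""", format="turtle")

print(g.serialize(format="turtle"))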

Semantic HTML refers to the traditional HTML practice of markup following intention, rather than specifying layout details directly. For example, the use of <em> denoting “emphasis” rather than <i>, which specifies italics. Layout details are left up to the browser, in combination with Cascading Style Sheets. But this practice falls short of specifying the semantics of objects such as items for sale or prices.

Microformats extend HTML syntax to create machine-readable semantic markup about objects including people, organisations, events and products.[15] Similar initiatives include RDFa, Microdata and Schema.org.

Semantic Web solutions

The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts.

These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessible databases,[16] or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research.

An example of a tag that would be used in a non-semantic web page:

<item>blog</item>

Encoding similar information in a semantic web page might look like this:

<item rdf:about="http://example.org/semantic-web/">Semantic Web</item>

Tim Berners-Lee calls the resulting network of Linked Data the Giant Global Graph, in contrast to the HTML-based World Wide Web. Berners-Lee posits that if the past was document sharing, the future is data sharing. His answer to the question of “how” provides three points of instruction. One, a URL should point to the data. Two, anyone accessing the URL should get data back. Three, relationships in the data should point to additional URLs with data.
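A sketch of those three points in practice, using Python’s rdflib to dereference a Linked Data URI; network access, and the assumption that DBpedia serves RDF for this URI via content negotiation, are required:

from rdflib import Graph, URIRef

# Point one: a URI names the data. Point two: fetching it returns data.
resource = URIRef("http://dbpedia.org/resource/Semantic_Web")
g = Graph()
g.parse(resource)

# Point three: relationships in the data point to further data URIs,
# each of which can be dereferenced in turn.
for predicate, obj in g.predicate_objects(subject=resource):
    if isinstance(obj, URIRef):
        print(predicate, "->", obj)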

Web 3.0

Main article: Web 3.0

Tim Berners-Lee has described the semantic web as a component of “Web 3.0”.[17]

People keep asking what Web 3.0 is. I think maybe when you’ve got an overlay of scalable vector graphics – everything rippling and folding and looking misty – on Web 2.0 and access to a semantic Web integrated across a huge space of data, you’ll have access to an unbelievable data resource …

—Tim Berners-Lee, 2006

“Semantic Web” is sometimes used as a synonym for “Web 3.0”,[18] though each term’s definition varies.

Challenges

Some of the challenges for the Semantic Web include vastness, vagueness, uncertainty, inconsistency, and deceit. Automated reasoning systems will have to deal with all of these issues in order to deliver on the promise of the Semantic Web.

  • Vastness: The World Wide Web contains many billions of pages. The SNOMED CT medical terminology ontology alone contains 370,000 class names, and existing technology has not yet been able to eliminate all semantically duplicated terms. Any automated reasoning system will have to deal with truly huge inputs.
  • Vagueness: These are imprecise concepts like “young” or “tall”. This arises from the vagueness of user queries, of concepts represented by content providers, of matching query terms to provider terms and of trying to combine different knowledge bases with overlapping but subtly different concepts. Fuzzy logic is the most common technique for dealing with vagueness.
  • Uncertainty: These are precise concepts with uncertain values. For example, a patient might present a set of symptoms which correspond to a number of different distinct diagnoses each with a different probability. Probabilistic reasoning techniques are generally employed to address uncertainty.
  • Inconsistency: These are logical contradictions which will inevitably arise during the development of large ontologies, and when ontologies from separate sources are combined. Deductive reasoning fails catastrophically when faced with inconsistency, because “anything follows from a contradiction”. Defeasible reasoning and paraconsistent reasoning are two techniques which can be employed to deal with inconsistency.
  • Deceit: This is when the producer of the information is intentionally misleading the consumer of the information. Cryptography techniques are currently utilized to alleviate this threat.

This list of challenges is illustrative rather than exhaustive, and it focuses on the challenges to the “unifying logic” and “proof” layers of the Semantic Web. The World Wide Web Consortium (W3C) Incubator Group for Uncertainty Reasoning for the World Wide Web (URW3-XG) final report lumps these problems together under the single heading of “uncertainty”. Many of the techniques mentioned here will require extensions to the Web Ontology Language (OWL), for example to annotate conditional probabilities. This is an area of active research.[19]

Standards

Standardization of the Semantic Web in the context of Web 3.0 is under the care of the W3C.[20]

Components

The term “Semantic Web” is often used more specifically to refer to the formats and technologies that enable it.[2] The collection, structuring and recovery of linked data are enabled by technologies that provide a formal description of concepts, terms, and relationships within a given knowledge domain. These technologies are specified as W3C standards and are listed below.

The Semantic Web Stack illustrates the architecture of the Semantic Web. The functions and relationships of the components can be summarized as follows:[21]

  • XML provides an elemental syntax for content structure within documents, yet associates no semantics with the meaning of the content contained within. XML is not at present a necessary component of Semantic Web technologies in most cases, as alternative syntaxes exist, such as Turtle. Turtle is a de facto standard, but has not been through a formal standardization process.
  • XML Schema is a language for providing and restricting the structure and content of elements contained within XML documents.
  • RDF is a simple language for expressing data models, which refer to objects (“web resources”) and their relationships. An RDF-based model can be represented in a variety of syntaxes, e.g., RDF/XML, N3, Turtle, and RDFa.[22] RDF is a fundamental standard of the Semantic Web.[23][24][25]
  • RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized hierarchies of such properties and classes (a small usage sketch follows this list).
  • OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. “exactly one”), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.
  • SPARQL is a protocol and query language for semantic web data sources.
  • RIF is the W3C Rule Interchange Format. It’s an XML language for expressing Web rules which computers can execute. RIF provides multiple versions, called dialects. It includes a RIF Basic Logic Dialect (RIF-BLD) and RIF Production Rules Dialect (RIF PRD).

Current state of standardization

Well-established standards:

  • RDF and RDF Schema
  • OWL
  • SPARQL
  • RIF

Not yet fully realized:

  • Unifying Logic and Proof layers

The intent is to enhance the usability and usefulness of the Web and its interconnected resources through:

  • Servers which expose existing data systems using the RDF and SPARQL standards. Many converters to RDF exist from different applications. Relational databases are an important source. The semantic web server attaches to the existing system without affecting its operation.
  • Documents “marked up” with semantic information (an extension of the HTML <meta> tags used in today’s Web pages to supply information for Web search engines using web crawlers). This could be machine-understandable information about the human-understandable content of the document (such as the creator, title, description, etc.) or it could be purely metadata representing a set of facts (such as resources and services elsewhere on the site). Note that anything that can be identified with a Uniform Resource Identifier (URI) can be described, so the semantic web can reason about animals, people, places, ideas, etc. Semantic markup is often generated automatically, rather than manually.
  • Common metadata vocabularies (ontologies) and maps between vocabularies that allow document creators to know how to mark up their documents so that agents can use the information in the supplied metadata (so that Author in the sense of ‘the Author of the page’ won’t be confused with Author in the sense of a book that is the subject of a book review)
  • Automated agents to perform tasks for users of the semantic web using this data
  • Web-based services (often with agents of their own) to supply information specifically to agents, for example, a Trust service that an agent could ask if some online store has a history of poor service or spamming

Skeptical reactions

Practical feasibility

Critics (e.g., Which Semantic Web?) question the basic feasibility of a complete or even partial fulfillment of the semantic web. Cory Doctorow‘s critique (“metacrap“) is from the perspective of human behavior and personal preferences. For example, people may include spurious metadata into Web pages in an attempt to mislead Semantic Web engines that naively assume the metadata’s veracity. This phenomenon was well-known with metatags that fooled the AltaVista ranking algorithm into elevating the ranking of certain Web pages: the Google indexing engine specifically looks for such attempts at manipulation. Peter Gärdenfors and Timo Honkela point out that logic-based semantic web technologies cover only a fraction of the relevant phenomena related to semantics.[26][27]

Specialized communities and organizations running intra-company projects have tended to adopt semantic web technologies more readily than peripheral, less-specialized communities.[28] The practical constraints on adoption have appeared less challenging where the domain and scope are more limited than those of the general public and the World Wide Web.[28]

Censorship and privacy

Enthusiasm about the semantic web could be tempered by concerns regarding censorship and privacy. For instance, text-analyzing techniques can now be easily bypassed by using other words (metaphors, for instance) or by using images in place of words. An advanced implementation of the semantic web would make it much easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has also been raised that, with the use of FOAF files and geolocation meta-data, there would be very little anonymity associated with the authorship of articles on things such as a personal blog. Some of these concerns were addressed in the “Policy Aware Web” project,[29] and this remains an active research and development topic.

Doubling output formats

Another criticism of the semantic web is that it would be much more time-consuming to create and publish content because there would need to be two formats for one piece of data: one for human viewing and one for machines. However, many web applications in development are addressing this issue by creating a machine-readable format upon the publishing of data or the request of a machine for such data. The development of microformats has been one reaction to this kind of criticism. Another argument in defense of the feasibility of the semantic web is the likely falling price of human intelligence tasks in digital labor markets, such as Amazon‘s Mechanical Turk.

Specifications such as eRDF and RDFa allow arbitrary RDF data to be embedded in HTML pages. The GRDDL (Gleaning Resource Descriptions from Dialects of Language) mechanism allows existing material (including microformats) to be automatically interpreted as RDF, so publishers only need to use a single format, such as HTML.

Projects


This section lists some of the many projects and tools that exist to create Semantic Web solutions.[30]

DBpedia

Main article: DBpedia

DBpedia is an effort to publish structured data extracted from Wikipedia: the data is published in RDF and made available on the Web for use under the GNU Free Documentation License, thus allowing Semantic Web agents to provide inferencing and advanced querying over the Wikipedia-derived dataset and facilitating interlinking, re-use and extension in other data sources.[31]
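For example, a minimal sketch of querying DBpedia’s public SPARQL endpoint with the Python SPARQLWrapper package; network access and the endpoint’s availability are assumed:

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Semantic_Web> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

# Prints the English-language abstract of the DBpedia resource.
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])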

FOAF

A popular vocabulary on the semantic web is Friend of a Friend (or FOAF), which uses RDF to describe the relationships people have to other people and the “things” around them. FOAF permits intelligent agents to make sense of the thousands of connections people have with each other, their jobs and the items important to their lives;[32] connections that may or may not be enumerated in searches using traditional web search engines. Because the connections are so vast in number, human interpretation of the information may not be the best way of analyzing them.
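A minimal sketch of a FOAF description built with Python’s rdflib; the person URIs and names are invented:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

# Hypothetical identifiers for two people.
me = URIRef("http://example.org/people/me")
ann = URIRef("http://example.org/people/ann")

g = Graph()
g.bind("foaf", FOAF)
g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("J. Doe")))
# foaf:knows links a person to the people they know.
g.add((me, FOAF.knows, ann))
g.add((ann, FOAF.name, Literal("Ann")))

print(g.serialize(format="turtle"))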

FOAF is an example of how the Semantic Web attempts to make use of the relationships within a social context.

SIOC

The Semantically-Interlinked Online Communities project (SIOC, pronounced “shock”) provides a vocabulary of terms and relationships that model web data spaces. Examples of such data spaces include, among others: discussion forums, blogs, blogrolls / feed subscriptions, mailing lists, shared bookmarks and image galleries.

GoPubMed

GoPubMed is a knowledge-based search engine for biomedical texts. The Gene Ontology (GO) and Medical Subject Headings (MeSH) serve as a “table of contents” in order to structure the millions of articles of the MEDLINE database.[33] The search engine allows its users to find relevant search results significantly faster than PubMed.[citation needed]

eagle-i.net

eagle-i is an open source, semantic web platform for entering and publishing information about resources used in biomedical research.[34] The platform consists of the Semantic Web Entry and Editing Tool (SWEET), an RDF database, and a search tool. All components of the eagle-i platform are driven by a central ontology to promote uniformity and interoperability with other platforms.[35][36] The eagle-i software, documentation, and information are accessible through Harvard Medical School’s open.med website.[37] The eagle-i project started as a consortium of nine universities (Harvard, Oregon Health & Science University, Dartmouth, Jackson State, Montana State, University of Puerto Rico, Morehouse College, University of Alaska, and University of Hawaii), but is now being used by more than thirty universities.[38]

NextBio

NextBio is a database consolidating high-throughput life sciences experimental data, tagged and connected via biomedical ontologies, and accessible via a search engine interface. Researchers can contribute their findings for incorporation into the database. The database currently supports gene expression and protein expression data as well as sequence-centric data, and is steadily expanding to support other biological data types.


References

  1. “XML and Semantic Web W3C Standards Timeline”. 2012-02-04.
  2. “W3C Semantic Web Activity”. World Wide Web Consortium (W3C). November 7, 2011. Retrieved November 26, 2011.
  3. Berners-Lee, Tim; Hendler, James; Lassila, Ora (May 17, 2001). “The Semantic Web”. Scientific American Magazine. Retrieved March 26, 2008.
  4. Feigenbaum, Lee (May 1, 2007). “The Semantic Web in Action”. Scientific American. Retrieved February 24, 2010.
  5. Berners-Lee, Tim (May 1, 2001). “The Semantic Web”. Scientific American. Retrieved March 13, 2008.
  6. Shadbolt, Nigel; Hall, Wendy; Berners-Lee, Tim (2006). “The Semantic Web Revisited”. IEEE Intelligent Systems. Retrieved April 13, 2007.
  7. Collins, Allan M.; Quillian, M. Ross (1969). “Retrieval time from semantic memory”. Journal of Verbal Learning and Verbal Behavior 8 (2): 240–247. doi:10.1016/S0022-5371(69)80069-1.
  8. Collins, Allan M.; Quillian, M. Ross (1970). “Does category size affect categorization time?”. Journal of Verbal Learning and Verbal Behavior 9 (4): 432–438. doi:10.1016/S0022-5371(70)80084-6.
  9. Collins, Allan M.; Loftus, Elizabeth F. (1975). “A spreading-activation theory of semantic processing”. Psychological Review 82 (6): 407–428. doi:10.1037/0033-295X.82.6.407.
  10. Quillian, M. R. (1967). “Word concepts: A theory and simulation of some basic semantic capabilities”. Behavioral Science 12 (5): 410–430. doi:10.1002/bs.3830120511. PMID 6059773.
  11. Quillian, M. Ross (1988). “Semantic memory”. In: Marvin Minsky (ed.), Semantic Information Processing. MIT Press, Cambridge, Mass.
  12. Berners-Lee, Tim; Fischetti, Mark (1999). Weaving the Web. HarperSanFrancisco, chapter 12. ISBN 978-0-06-251587-2.
  13. Gerber, AJ; Barnard, A; Van der Merwe, Alta (2006). “A Semantic Web Status Model”. Integrated Design & Process Technology, Special Issue: IDPT 2006.
  14. Gerber, Aurona; Van der Merwe, Alta; Barnard, Andries (2008). “A Functional Semantic Web Architecture”. European Semantic Web Conference 2008 (ESWC’08), Tenerife, June 2008.
  15. Allsopp, John (March 2007). Microformats: Empowering Your Markup for Web 2.0. Friends of ED. p. 368. ISBN 978-1-59059-814-6.
  16. Chebotko, Artem; Lu, Shiyong (2009). Querying the Semantic Web: An Efficient Approach Using Relational Databases. LAP Lambert Academic Publishing. ISBN 978-3-8383-0264-5.
  17. Shannon, Victoria (June 26, 2006). “A ‘more revolutionary’ Web”. International Herald Tribune. Retrieved May 24, 2006.
  18. Introducing The Concept of Web 3.0.
  19. Lukasiewicz, Thomas; Straccia, Umberto. “Managing uncertainty and vagueness in description logics for the Semantic Web”.
  20. Semantic Web Standards, published by the W3C.
  21. “OWL Web Ontology Language Overview”. World Wide Web Consortium (W3C). February 10, 2004. Retrieved November 26, 2011.
  22. “RDF tutorial”. Dr. Leslie Sikos. Retrieved 2011-07-05.
  23. “Resource Description Framework (RDF)”. World Wide Web Consortium.
  24. “Standard websites”. Dr. Leslie Sikos. Retrieved 2011-07-05.
  25. Allemang, D.; Hendler, J. (2011). “RDF – The basis of the Semantic Web”. In: Semantic Web for the Working Ontologist (2nd ed.). Morgan Kaufmann. doi:10.1016/B978-0-12-385965-5.10003-2.
  26. Gärdenfors, Peter (2004). “How to make the Semantic Web more semantic”. Formal Ontology in Information Systems: Proceedings of the Third International Conference (FOIS-2004). IOS Press. pp. 17–34.
  27. Honkela, Timo; Könönen, Ville; Lindh-Knuutila, Tiina; Paukkeri, Mari-Sanna (2008). “Simulating processes of concept formation and communication”. Journal of Economic Methodology.
  28. Herman, Ivan (2007). “State of the Semantic Web”. Semantic Days 2007. Retrieved July 26, 2007.
  29. “Policy Aware Web Project”. Policyawareweb.org. Retrieved 2013-06-14.
  30. See, for instance: Bergman, Michael K. “Sweet Tools”. AI3; Adaptive Information, Adaptive Innovation, Adaptive Infrastructure. Retrieved January 5, 2009.
  31. “wiki.dbpedia.org: About”. Dbpedia.org. 2013-05-08. Retrieved 2013-06-14.
  32. “FOAF”. semanticweb.org. Retrieved 2013-06-14.
  33. GoPubMed in a nutshell.
  34. “eagle-i central search tool”. President and Fellows of Harvard College.
  35. “eagle-i resource ontology”. Google Code.
  36. Vasilevsky, N; Johnson, T; Corday, K; Torniai, C; Brush, M; Segerdell, E; Wilson, M; Shaffer, C; Robinson, D; Haendel, M (2012). “Research resources: curating the new eagle-i discovery system”. Database: The Journal of Biological Databases and Curation 2012: bar067. doi:10.1093/database/bar067. PMID 22434835.
  37. “eagle-i open source site”. open.med.
  38. “Participating eagle-i institutions”. eagle-i.net.


Cloud, Big Data and Cognitive Computing

IBM Invests in Chip Technologies Making a Strategic Play in Cloud Chess

NEW YORK (The Street) - IBM (IBM) is pumping $3 billion into hardware innovation – trying to jump ahead of the pack in cloud and cognitive computing.

Serving as a testament to Big Blue’s commitment to get on the leading side of cloud innovation, the company announced it will invest $3 billion over the next five years in two broad research and early stage development programs for chip technology. The research focuses on making semiconductors more efficient for cloud computing and Big Data systems.

IBM is taking the first in a series of steps in changing up the hardware. The areas focused on in the funded studies include carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and, no surprise here, cognitive computing. IBM is looking to continue shrinking its chips while making them more energy efficient, as well as looking to switch to different semiconductor technologies.

One of the research programs funded by the investment is “7 nanometer and beyond.” Aiming to build the smallest chips yet, IBM researchers and semiconductor experts are working to scale today’s 22-nanometer chips down to 7 nanometers by the end of the decade. IBM said its research will produce chips that are smaller and faster and use less power.

“This is another step we see in how we rethink computing systems; the $3 billion underscores our investment to remain a leader in high-performance, high-end storage and cognitive computing,” said Tom Rosamilia, senior VP of IBM Systems & Technology Group, in a phone interview with The Street.

IBM is trying to beat out the competition in cloud and Big Data from the bottom up.

“Over the next ten years we will be developing fundamentally new systems no one has yet imagined,” Rosamilia stated. “[W]e believe that no other company can do this from the semiconductor all the way through the software stack.”

“This is a response to the shifts we are seeing in the industry in cloud,” said Rosamilia.

The research teams will comprise more than a thousand IBM Research scientists and engineers from around the world with teams in New York, California, and Zurich, Switzerland.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

The second research project is focused on developing alternative technologies for post-silicon era chips. IBM scientists and other experts say looking beyond silicon is necessary because of the physical limitations of those semiconductors.

Carbon nanotube transistors could be used to replace the transistors in chips that power IBM’s data-crunching servers, high-performance computers, and ultra-fast smartphones. These transistors can be 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology.

“As we look at this, the clock is really ticking on silicon,” said Rosamilia. “We have been riding the silicon train for quite some time, but it’s really starting to taper off.” IBM hopes its research will bridge beyond silicon into the next era of semiconductors, looking into technologies like carbon nanotubes and graphene.

“We are committed to the next generation; in fact, we are innovating and inventing the next generation of what will follow the silicon era,” Rosamilia noted.

-Written by Kathryn Mykleseth in New York


5 Ways Semantic Technology Is Transforming the Financial Services Industry

By Marty Loughlin, Wall Street & Technology, @insurancetech

Semantic search intuitively finds and connects relevant data across the enterprise. Innovation in this technology is helping organizations simplify and transform operations.

JULY 08, 2014. After many years focused on reducing costs, financial services organizations are once again seeking to grow revenue. However, in the intervening years, the business environment has changed dramatically, and these organizations face significant new challenges on the journey back to growth.

 

First, we are in a new era of rapidly evolving regulatory oversight. Organizations must not only comply with an ever-growing list of compliance and reporting requirements, they must also testify to the quality of the data they report on. Second, savvy consumers, many of whom grew up in the age of user-friendly apps and instant data access, are demanding better service and products tailored to their individual needs.

 

Responding to these new challenges will require massive business and IT transformation. In particular, these organizations will need to change how they track, manage, and consume data. For many organizations, this data is not easily accessible — it is distributed across the organization, often trapped in local business units, applications, data warehouses, spreadsheets, and documents.

 

Traditional technologies are struggling to address this challenge and many believe a new approach is required. Some of the new big-data solutions do help. They are good at liberating and colocating data. However, they often struggle to make it usable. Creating a “data lake” where rigid structure is not required can result in yet another silo of unusable data where context, meaning, and sources are lost. Many organizations are turning to semantic technology for the answer.

 

Semantic technology has been around since the late 90s but has recently gained momentum as enterprise-quality applications have emerged that make it operationally viable. Briefly, semantic technology enables data to be described, managed, and consumed in an agile, standardized, human-friendly, and machine-readable way.

 

While search technology allows you to find data, semantic technology enables you to find it, understand it, link it, and take action on it. It is rapidly becoming a data “power tool” for financial services, offering agility and access to data not easily available before.

 

Following are five ways semantic technology is simplifying and transforming operations in the financial industry.

 

1. Selling more products and services
For most organizations, the easiest path to new revenue is to sell more to existing customers. To sell to your customers, you must first know them — who they are, what they buy, how they interact with you, and how they feel about your products and services.

 

Semantic technology unlocks and links silos of diverse customer data (accounts, transactions, interactions, and social media) to create a combined 360-degree view of customer interactions that can be used to make specific, individualized recommendations for the next best action. For example, mining call center transcripts for important life events like marriage or births and cross-referencing this information against the customer’s business interactions can be used to recommend new and relevant products.

Read the rest of this article on Wall Street & Technology

iswc2014.semanticweb.org

  • ISWC 2014 is the premier international forum for the Semantic Web / Linked Data community. Here, scientists, industry specialists, and practitioners meet to discuss the future of practical, scalable, user-friendly, and game-changing solutions.

    • Registration will open soon! Check the website here.
    • For visa information, see the information page.


Consortium aims to improve M2M communications

CONSORTIUM SET TO BOOST IOT

Vendors agree on standards so machines can talk

8 July 2014 by Nick Booth -


Top technology vendors have teamed up to create a consortium aimed at creating the right conditions for the Internet of Things (IoT) to flourish, creating more demand for data centers and hosting services.

Atmel, Broadcom, Dell, Intel, Samsung and Wind River have jointly established a new industry consortium aiming to improve machine to machine (M2M) communications across form factors, vendors and operating systems.

The Open Interconnect Consortium (OIC) will define a common communications framework based on industry standard technologies for both wireless connection and managing the flow of information across the IoT devices.

The goal is to make the types of form factors, operating systems and service providers irrelevant when machines talk to each other so that the IoT industry develops faster.

Under the scheme, member companies will use their software and engineering skills to define a protocol, enforce the use of open source software, and create a certification program.

The OIC said it will specify connectivity options using existing and emerging wireless standards, with the end goal being compatibility across the entire variety of systems.

The consortium takes in a range of industry verticals; smart home vendors, mobile phone makers, and office systems developers will participate in the program.

Dell’s CTO for client solutions, Glen Robson, said the first OIC open source code will be designed for smart homes and office solutions, but data centers and enterprises will also be catered for.

“The explosion of the IoT is a transformation that will have a major impact and an open, secure and manageable connectivity framework is critical,” Robson said.

Intel’s VP for software and services Doug Fisher said the success of the IoT hinges on common frameworks based on open industry standards.

“Our goal in founding this new consortium is to solve the challenge of connectivity without tying the ecosystem to one company’s solution,” Fisher said.


Chrome Experiments

The WebGL Globe

The WebGL Globe is an open platform for geographic data visualization. We encourage you to copy the code, add your own data, and create your own.

If you do create your own globe, please share it with us. We will post our favorite links below.

Features:

  • Latitude / longitude data spikes
  • Color gradients, based on data value or type
  • Mouse wheel to zoom
  • More features are under development…

Created by the Google Data Arts Team.



Google’s future: microphones in the ceiling and microchips in your head


Google’s ideas for a world of search without typing are taking outlandish shape

“I don’t have a microchip in my head – yet,” says the man charged with transforming Google’s relations with the technology giant’s human users.

But Scott Huffman does envisage a world in which Google microphones, embedded in the ceiling, listen to our conversations and interject verbal answers to whatever inquiry is posed.

Huffman, Google’s engineering director, leads a team tasked with making conversations with the search engine more reflective of the complex interactions people enjoy with each other.

The future of the $300 billion business depends upon automatically predicting the search needs of users and then presenting them with the data they need.

“Computing is becoming so inexpensive that it’s inevitable that there will be a ubiquity of connected devices around us, from our lapel to our car to Google Glass [a new optical head-mounted computer],” said Huffman during a visit to the UK from the company’s California base.

A microphone hanging from the ceiling, responding to verbal queries, would remove the need to whip out a phone to remind yourself what time tomorrow’s flight leaves. It could also make sure you don’t miss the flight altogether.

“Like a great personal assistant, it will interrupt you and say ‘you’ve got to leave now’. It will bring you the information you want,” Mr Huffman said.

In fact, believes Mr Huffman, who has been working on refining search for 15 years, the clunky physical act of typing requests into Google’s search box will gradually recede almost to nothing.

The information could be relayed via “a wearable device, perhaps it might have a small screen, which you can only interact with through your voice and maybe touch but nothing else”.

For play as well as work

The microphone network would have leisure uses too.

“Imagine I can say to a microphone in the ceiling of the room ‘Can you bring up a video of the highlights of yesterday’s Pittsburgh Steelers game and play it on a TV in the living room?’ and it works because the Cloud means everything is connected,” he says.

“I could ask my Google ‘assistant’ where we should have lunch: somewhere that serves French food and isn’t too expensive. Google will go ‘OK, we’ll go to that place’ and when I get in my car it should already be navigating to that restaurant. We’re really excited by the idea of multiple devices being able to talk to each other.”

Whether Google users want a microphone embedded in every ceiling is another matter after the company became enveloped in a crisis of trust following Edward Snowden’s revelations about the US Government’s National Security Agency’s clandestine electronic-surveillance programme PRISM.

On Monday, Google joined forces with fellow tech giants including Facebook, Apple and Yahoo! to call for sweeping changes to US surveillance laws and an international ban on bulk collection of data to help preserve the public’s “trust in the internet”.

“We take privacy and security very seriously,” Mr Huffman said. “Our goal is to keep users’ information private and use it in a way that helps that user. When I ask Google for travel information during my trip it draws it out using my hotel confirmation email. So I’m trusting Google with that information and in exchange I’m getting that value.”

Google believes it can ultimately fulfil people’s data needs by sending results directly to microchips implanted into its users’ brains. Research has already begun with such chips to help disabled people steer their wheelchairs.

“If you think hard enough about certain words they can be picked up by sensors fairly easily. It’ll be interesting to see how that develops,” Mr Huffman said.

His current priority is utilising Google’s Knowledge Graph, an expanding store of information holding 18 billion facts on 60 million subjects, to deliver a more “human” search response. Voice-based search requests are more complex than the two-word searches typed into the search engine.

“My team is working very hard on the idea of a richer conversation with Google. We use a fairly complex linguistic structure in conversation that Google today doesn’t understand.

“But five years from now we will be having that kind of conversation with Google and it will just seem natural. Google will answer you the same way a person would answer.”

The engineer adds: “Google will understand context in conversation but it’s not an armchair psychiatrist. You can’t have a conversation about your mother. Google can’t talk to me about how I feel about things until it understands factual ‘things’. We’re just getting started understanding ‘things’ in the world.”

 

Osho Meditation

EYE-GAZING

Step 1: Look Into the Other
“Sit and look into each other’s eyes, [it is better to blink as little as possible, a soft gaze]. Look deeper and deeper, without thinking.

“If you don’t think, if you just stare into the eyes, soon the waves will disappear and the ocean will be revealed. If you can look deep down into the eyes, you will feel that the man has disappeared, the person has disappeared. Some oceanic phenomenon is hidden behind, and this person was just a waving of a depth, a wave of something unknown, hidden.

“Do it first with a human being, because you are closer to that type of wave. Then move to animals — a little more distant. Then move to trees — still more distant waves; then move to the rocks.

Step 2: The Oceanic
“Soon you will become aware of an ocean all around. Then you will see that you are also just a wave; your ego is just a wave.

“Behind that ego, the nameless, the one, is hidden. Only waves are born, the ocean remains the same. The many are born, the one remains the same.”

Osho, Vedanta: Seven Steps to Samadhi Talk #4

 

To continue reading, click here

