From: owner-ammf-digest@smoe.org (alt.music.moxy-fruvous digest)
To: ammf-digest@smoe.org
Subject: alt.music.moxy-fruvous digest V14 #4525
Reply-To: ammf@fruvous.com
Sender: owner-ammf-digest@smoe.org
Errors-To: owner-ammf-digest@smoe.org
Precedence: bulk

alt.music.moxy-fruvous digest    Monday, July 6 2020    Volume 14 : Number 4525

Today's Subjects:
-----------------
  Your Angelical Horoscope unveils your future! ["Angela" <**Angela**@harda]
  Get every thing you need to help restore your kidney health ["Kidney Heal]
  Unlimited FREE Traffic Solution, Get 1000s of Fresh Backlinks (download now) ["Powerful Traffic Solution" ]
  Change Your Life Today With Us!!! ["Brian" ]
  [Yes or No] Would You Fly a Trump 2020 on Your House? ["Trump 2020 Flags"]
  Master Chef Reveals Secret Recipes ["Favorite Restaurant Dishes"

------------------------------

Subject: Your Angelical Horoscope unveils your future!

Your Angelical Horoscope unveils your future!

http://hardagain.bid/2iCjsky6orzcfgFKD-syeKPIR0lRuQ_FiJamQEDYef6DlVHl
http://hardagain.bid/Bsum8xDhNLYSUTplEo81rVYDRbznjI98ibgDkA2LDq_paSrB

In a 1970 paper, Edgar F. Codd described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organise the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.

Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.

In the relational model, records are "linked" using virtual keys not stored in the database but defined as needed between the data contained in the records. The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
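As a rough, self-contained sketch of these ideas, the following Python snippet uses the standard-library sqlite3 module to define two normalized tables linked by a primary key and to join them declaratively rather than by navigating pointers. The table and column names are invented for illustration and are not taken from the text above.

# Minimal sketch of the relational ideas described above, using Python's
# standard-library sqlite3 module. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")

# Each table holds one type of entity; "id" serves as the primary key.
conn.executescript("""
    CREATE TABLE users (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE phone_numbers (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),  -- cross-reference by key, not disk address
        number  TEXT NOT NULL
    );
""")

conn.execute("INSERT INTO users VALUES (1, 'Ada')")
conn.execute("INSERT INTO phone_numbers VALUES (10, 1, '555-0100')")

# A declarative join on the key relationship; the DBMS chooses the access path.
for row in conn.execute("""
        SELECT users.name, phone_numbers.number
        FROM users JOIN phone_numbers ON phone_numbers.user_id = users.id"""):
    print(row)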
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys. For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.

As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.

------------------------------

Date: Mon, 6 Jul 2020 08:50:39 -0400
From: "Kidney Health"
Subject: Get every thing you need to help restore your kidney health

Get every thing you need to help restore your kidney health

http://kidneyhealth.us/nJpVWrUh32aQmu_tticTX8nfBVGrIvhokC2Nd92waHoYfhE
http://kidneyhealth.us/vwQToRKP-os3I_66HHvcOKE499MR25W25Rri182VCMBdTE_9

Web search engines get their information by web crawling from site to site. The "spider" checks for the standard filename robots.txt, addressed to it. The robots.txt file contains directives for search spiders, telling them which pages to crawl. After checking for robots.txt and either finding it or not, the spider sends certain information back to be indexed depending on many factors, such as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, or its metadata in HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "No web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially".

Indexing means associating words and other definable tokens found on web pages to their domain names and HTML-based fields. The associations are made in a public database, made available for web search queries. A query from a user can be a single word, multiple words or a sentence. The index helps find information relating to the query as quickly as possible. Some of the techniques for indexing and caching are trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis.
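To make the idea of indexing more concrete, here is a minimal inverted-index sketch in Python. The sample pages and the bare-word tokenization are invented for illustration; real engines index far more than plain words (fields, metadata, link structure) and keep the index in specialized structures rather than a dictionary.

# Minimal inverted-index sketch: map each token to the set of pages that
# contain it. The sample pages are invented for illustration only.
from collections import defaultdict

pages = {
    "example.org/a": "relational databases store data in tables",
    "example.org/b": "search engines index pages by crawling",
    "example.org/c": "databases and search engines both build indexes",
}

index = defaultdict(set)
for url, text in pages.items():
    for token in text.lower().split():
        index[token].add(url)

# Answering a one-word query becomes a direct lookup rather than a scan.
print(sorted(index["databases"]))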
Between visits by the spider, the cached version of a page (some or all the content needed to render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can just act as a web proxy instead. In this case the page may differ from the search terms indexed. The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the web site when the actual page has been lost, but this problem is also considered a mild form of linkrot.

[Image caption: High-level architecture of a standard Web crawler]

Typically, when a user enters a query into a search engine, it is a few keywords. The index already has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that are the search results list: every page in the entire list must be weighted according to information in the indexes. Then the top search result item requires the lookup, reconstruction, and markup of the snippets showing the context of the keywords matched. These are only part of the processing each search results web page requires, and further pages (next to the top) require more of this post-processing.

Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators and search parameters to refine the search results. These provide the necessary controls for the feedback loop in which users filter and weight the results while refining a search, given the initial pages of the first results. For example, since 2007 the Google.com search engine has allowed one to filter by date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range. It's also possible to weight by date because each page has a modification time. Most search engines support the use of the boolean operators AND, OR and NOT to help end users refine the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, which involves using statistical analysis on pages containing the words or phrases searched for. As well, natural language queries allow the user to type a question in the same form one would ask it to a human.
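As a rough, self-contained sketch of how boolean operators could combine results from an index, the snippet below uses Python set operations over a tiny invented index; it is illustrative only and not how any particular engine implements these operators.

# Illustrative only: boolean AND / OR / NOT as set operations over a tiny
# invented inverted index mapping words to the pages that contain them.
index = {
    "privateer": {"page1", "page3"},
    "pirate":    {"page1", "page2"},
    "navy":      {"page2", "page4"},
}

def lookup(word):
    return index.get(word, set())

print(lookup("privateer") & lookup("pirate"))   # AND: pages with both words
print(lookup("privateer") | lookup("navy"))     # OR: pages with either word
print(lookup("pirate") - lookup("navy"))        # pirate NOT navy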
------------------------------

Date: Mon, 6 Jul 2020 08:16:28 -0400
From: "Powerful Traffic Solution"
Subject: Unlimited FREE Traffic Solution, Get 1000s of Fresh Backlinks (download now)

Unlimited FREE Traffic Solution, Get 1000s of Fresh Backlinks (download now)

http://crisplinks.co/iI80KxaQOkCtnDsBkfTelOxQNE4yd_nZT3A-bsu9RVt15XKh
http://crisplinks.co/2oJaGcfDE-EqowP2rz1ieDDwX96_UshL6QT7s2qEO0C4r1fw

A data warehouse (DW) is a repository of an organization's electronically stored data. Data warehouses are designed to manage and store the data. Data warehouses differ from business intelligence (BI) systems, because BI systems are designed to use data to create reports and analyze the information, to provide strategic guidance to management. Metadata is an important tool in how data is stored in data warehouses.

The purpose of a data warehouse is to house standardized, structured, consistent, integrated, correct, "cleaned" and timely data, extracted from various operational systems in an organization. The extracted data are integrated in the data warehouse environment to provide an enterprise-wide perspective. Data are structured in a way to serve the reporting and analytic requirements. The design of structural metadata commonality using a data modeling method such as entity relationship model diagramming is important in any data warehouse development effort. Such data models detail metadata on each piece of data in the data warehouse.

An essential component of a data warehouse/business intelligence system is the metadata and tools to manage and retrieve the metadata. Ralph Kimball describes metadata as the DNA of the data warehouse, as metadata defines the elements of the data warehouse and how they work together.

Kimball et al. refer to three main categories of metadata: technical metadata, business metadata and process metadata. Technical metadata is primarily definitional, while business metadata and process metadata are primarily descriptive. The categories sometimes overlap.

Technical metadata defines the objects and processes in a DW/BI system, as seen from a technical point of view. The technical metadata includes the system metadata, which defines the data structures such as tables, fields, data types, indexes and partitions in the relational engine, as well as databases, dimensions, measures, and data mining models. Technical metadata defines the data model and the way it is displayed for the users, with the reports, schedules, distribution lists, and user security rights.

Business metadata is content from the data warehouse described in more user-friendly terms. The business metadata tells you what data you have, where they come from, what they mean and what their relationship is to other data in the data warehouse. Business metadata may also serve as documentation for the DW/BI system. Users who browse the data warehouse are primarily viewing the business metadata.

Process metadata is used to describe the results of various operations in the data warehouse. Within the ETL process, all key data from tasks is logged on execution. This includes start time, end time, CPU seconds used, disk reads, disk writes, and rows processed. When troubleshooting the ETL or query process, this sort of data becomes valuable. Process metadata serves as the fact measurements of building and using a DW/BI system. Some organizations make a living out of collecting and selling this sort of data to companies; in that case the process metadata becomes the business metadata for the fact and dimension tables. Collecting process metadata is in the interest of business people who can use the data to identify the users of their products, which products they are using, and what level of service they are receiving.
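A minimal sketch of what recording process metadata for an ETL task might look like follows; the task name, table, and fields are invented for illustration, and real DW/BI tooling records far more detail than this.

# Illustrative sketch: log process metadata (start/end time, rows processed)
# for a made-up ETL task into an in-memory SQLite table.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE process_metadata (
    task TEXT, start_time REAL, end_time REAL, rows_processed INTEGER)""")

def run_etl_task(name, rows):
    start = time.time()
    processed = 0
    for _ in rows:          # stand-in for the actual extract/transform/load work
        processed += 1
    end = time.time()
    conn.execute("INSERT INTO process_metadata VALUES (?, ?, ?, ?)",
                 (name, start, end, processed))

run_etl_task("load_customers", range(1000))
print(conn.execute("SELECT task, rows_processed FROM process_metadata").fetchall())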
------------------------------

Date: Mon, 6 Jul 2020 05:03:14 -0400
From: "Car Warranty"
Subject: Choice Auto Warranty

Choice Auto Warranty

http://smartnets.bid/sy9En9R3zn-ZaKHgObb8D8OYRUkcR6uBKBwu7fIdGhzH1Npd
http://smartnets.bid/q_lZu0wZ7O2cwKHq-vU8NaQ25BLo-rgs4euz5sDIqEbRhRdZ

A database management system provides three views of the database data:

The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.

The conceptual level unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is outside the concern of the various database end-users, and is rather of interest to database application developers and database administrators.

The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.

While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are of interest to the human resources department. Thus different departments need different views of the company's database.

The three-level database architecture relates to the concept of data independence, which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.

The conceptual view provides a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice, a given DBMS usually uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures.
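A small sketch of the external-level idea using SQL views via Python's sqlite3 module is shown below. The employee table and the two views are invented to mirror the finance/HR example above and are not taken from any particular system.

# Illustrative sketch: one conceptual-level table, two external-level views,
# so finance and HR each see only what they need. All names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (
        id INTEGER PRIMARY KEY, name TEXT, salary REAL, home_address TEXT);
    -- External view for the finance department: payment details only.
    CREATE VIEW finance_view AS SELECT id, name, salary FROM employees;
    -- External view for human resources: personal details, no salary.
    CREATE VIEW hr_view AS SELECT id, name, home_address FROM employees;
    INSERT INTO employees VALUES (1, 'Ada', 5200.0, '12 Example St');
""")

print(conn.execute("SELECT * FROM finance_view").fetchall())
print(conn.execute("SELECT * FROM hr_view").fetchall())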
------------------------------

Date: Mon, 6 Jul 2020 07:27:31 -0400
From: "Brian"
Subject: Change Your Life Today With Us!!!

Change Your Life Today With Us!!!

http://dogdentist.bid/t3s3o_jL99IYpaB8V5HGzg3C49aTp3A2PjHE21rsG_N5AyGC
http://dogdentist.bid/86WMzJ62387y8qu6Z9RkOnhgFmHOc2OqOXNVE3o7UFyvDrtd

Until the advent of non-volatile computer memories like USB sticks, persistent data storage was traditionally achieved by writing the data to external block devices like magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media is the data key and the blocks are the data values. Early data file-systems, or disc operating systems, used to reserve contiguous blocks on the disc drive for data files. In those systems, the files could be filled up, running out of data space before all the data had been written to them. Thus much unused data space was reserved unproductively to avoid incurring that situation. This was known as raw disk. Later file-systems introduced partitions. They reserved blocks of disc data space for partitions and used the allocated blocks more economically, by dynamically assigning blocks of a partition to a file as needed. To achieve this, the file-system had to keep track of which blocks were used or unused by data files in a catalog or file allocation table. Though this made better use of the disc data space, it resulted in fragmentation of files across the disc, and a concomitant performance overhead due to latency. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in virtualization of disc drives, i.e. a logical drive can be defined as partitions from a number of physical drives.

Indexed data

Retrieving a small subset of data from a much larger set implies searching through the data sequentially. This is uneconomical. Indexes are a way to copy out keys and location addresses from data structures in files, tables and data sets, then organize them using inverted tree structures to reduce the time taken to retrieve a subset of the original data. In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and the dynamic hash key indexing methods. Indexing is yet another costly overhead for filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys (or even the key and the data together) and using a binary search on them.

Abstraction and indirection

Object orientation uses two basic concepts for understanding data and software: 1) the taxonomic rank-structure of program-code classes, which is an example of a hierarchical data structure; and 2) at run time, the creation of data key references to in-memory data-structures of objects that have been instantiated from a class library. It is only after instantiation that an executing object of a specified class exists. After an object's key reference is nullified, the data referred to by that object ceases to be data because the data key reference is null; and therefore the object also ceases to exist. The memory locations where the object's data was stored are then referred to as garbage and are reclassified as unused memory available for reuse.

Database data

The advent of databases introduced a further layer of abstraction for persistent data storage. Databases use metadata, and a structured query language protocol between client and server systems, communicating over a network, using a two-phase commit logging system to ensure transactional completeness when persisting data.
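The "Indexed data" passage above mentions sorted keys searched with a binary search as one way to organize an index. A minimal sketch follows; the keys and block addresses are invented, and real index structures such as B-trees are considerably more involved.

# Illustrative sketch: a sorted list of (key, block_address) pairs searched
# with binary search, instead of scanning the data sequentially.
import bisect

# Invented example: record keys mapped to the disk block holding the record.
index = [(1001, 7), (1004, 2), (1009, 5), (1042, 11)]   # kept sorted by key
keys = [k for k, _ in index]

def find_block(key):
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return index[i][1]
    return None   # key not present

print(find_block(1009))   # -> 5
print(find_block(1010))   # -> None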
------------------------------

Date: Mon, 6 Jul 2020 04:15:09 -0400
From: "Trump 2020 Flags"
Subject: [Yes or No] Would You Fly a Trump 2020 on Your House?

[Yes or No] Would You Fly a Trump 2020 on Your House?

http://hardagain.bid/ZeSNTpR3TJny8GMZgtS4I0ysZsEvXVgdHqJP6ttkZ1ewig7R
http://hardagain.bid/1A-mUvxVqTLpCBF9qREIUugCffY1L0i9WBKsk8kyjeHvSis

The legal framework around authorised sea-raiding was considerably murkier outside of Europe. Unfamiliarity with local forms of authority created difficulty determining who was legitimately sovereign on land and at sea, whether to accept their authority, or whether the opposing parties were, in fact, pirates.

Mediterranean corsairs operated with a style of patriotic-religious authority that Europeans, and later Americans, found difficult to understand and accept. It did not help that many European privateers happily accepted commissions from the deys of Algiers, Tangiers and Tunis. The sultans of the Sulu archipelago (in the present-day Philippines) held only a tenuous authority over the local Iranun communities of slave-raiders. The sultans created a carefully spun web of marital and political alliances in an attempt to control unauthorised raiding that would provoke war against them. In Malay political systems, the legitimacy and strength of their Sultan's management of trade determined the extent to which he exerted control over the sea-raiding of his coastal people.

Privateers were implicated in piracy for a number of complex reasons. For colonial authorities, successful privateers were skilled seafarers who brought in much-needed revenue, especially in newly settled colonial outposts. These skills and benefits often caused local authorities to overlook a privateer's shift into piracy when a war ended. The French Governor of Petit-Goave gave buccaneer Francois Grogniet blank privateering commissions, which Grogniet traded to Edward Davis for a spare ship so the two could continue raiding Spanish cities under a guise of legitimacy. New York Governors Jacob Leisler and Benjamin Fletcher were removed from office in part for their dealings with pirates such as Thomas Tew, to whom Fletcher had granted commissions to sail against the French, but who ignored his commission to raid Mughal shipping in the Red Sea instead.

Some privateers faced prosecution for piracy. William Kidd accepted a commission from the British king William to hunt pirates but was later hanged for piracy. He had been unable to produce the papers of the prizes he had captured to prove his innocence. Privateering commissions were easy to obtain during wartime but when the war ended and sovereigns recalled the privateers, many refused to give up the lucrative business and turned to piracy. Boston minister Cotton Mather lamented after the execution of pirate John Quelch: "Yea, Since the Privateering Stroke, so easily degenerates into the Piratical; and the Privateering Trade, is usually carried on with so Unchristian a Temper, and proves an inlet unto so much Debauchery, and Iniquity, and Confusion, I believe, I shall have Good men Concur with me, in wishing, That Privateering may no more be practised

------------------------------

Date: Mon, 6 Jul 2020 07:41:35 -0400
From: "Favorite Restaurant Dishes"
Subject: Master Chef Reveals Secret Recipes

Master Chef Reveals Secret Recipes

http://favoriterestaurant.us/6pau3fqXRHBcuQYuRfbKr3sik5ECdbDZ3bMeODqhGxIeDx5V
http://favoriterestaurant.us/Y8wGLe9tKTYEkPBKSPM61ds_aoX4FnYpKL0YLMf_GRBbUsD1

For example, a digital image may include metadata that describes how large the picture is, the color depth, the image resolution, when the image was created, the shutter speed, and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can also contain descriptions of page content, as well as key words linked to the content. These are often called "metatags", which were used as the primary factor in determining order for a web search until the late 1990s. The reliance on metatags in web searches was decreased in the late 1990s because of "keyword stuffing".
Metatags were largely misused to trick search engines into thinking some websites had more relevance in the search than they really did.

Metadata can be stored and managed in a database, often called a metadata registry or metadata repository. However, without context and a point of reference, it might be impossible to identify metadata just by looking at it. For example: by itself, a database containing several numbers, all 13 digits long, could be the results of calculations or a list of numbers to plug into an equation - without any other context, the numbers themselves can be perceived as the data. But if given the context that this database is a log of a book collection, those 13-digit numbers may now be identified as ISBNs - information that refers to the book, but is not itself the information within the book.

The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts", where it is clear that he uses the term in the ISO 11179 "traditional" sense, which is "structural metadata", i.e. "data about the containers of data", rather than the alternative sense "content about individual instances of data content" or metacontent, the type of data usually found in library catalogues. Since then the fields of information management, information science, information technology, librarianship, and GIS have widely adopted the term. In these fields the word metadata is defined as "data about data". While this is the generally accepted definition, various disciplines have adopted their own more specific explanations and uses of the term.

Types

While metadata applications are manifold, covering a large variety of fields, there are specialized and well-accepted models to specify types of metadata. Bretherton & Singley (1994) distinguish between two distinct classes: structural/control metadata and guide metadata. Structural metadata describes the structure of database objects such as tables, columns, keys and indexes. Guide metadata helps humans find specific items and is usually expressed as a set of keywords in a natural language. According to Ralph Kimball, metadata can be divided into two similar categories: technical metadata and business metadata. Technical metadata corresponds to internal metadata, and business metadata corresponds to external metadata. Kimball adds a third category, process metadata. On the other hand, NISO distinguishes among three types of metadata: descriptive, structural, and administrative.

Descriptive metadata is typically used for discovery and identification, as information to search and locate an object, such as title, author, subjects, keywords, and publisher. Structural metadata describes how the components of an object are organized. An example of structural metadata would be how pages are ordered to form chapters of a book. Finally, administrative metadata gives information to help manage the source. Administrative metadata refers to technical information, such as the file type or when and how the file was created.
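As a toy illustration of the NISO categories just described, the record below groups invented fields for an invented book under descriptive, structural, and administrative headings; real schemes such as Dublin Core or MODS define their own field sets.

# Illustrative only: a metadata record for an invented book, grouped by the
# NISO categories (descriptive, structural, administrative).
record = {
    "descriptive": {          # discovery and identification
        "title": "An Invented Example Book",
        "author": "A. Author",
        "keywords": ["metadata", "example"],
    },
    "structural": {           # how the parts of the object are organized
        "chapters": [{"title": "Intro", "pages": "1-12"},
                     {"title": "Methods", "pages": "13-40"}],
    },
    "administrative": {       # information to help manage the source
        "file_type": "application/pdf",
        "created": "2020-07-06",
        "rights": "All rights reserved",   # rights management sub-type
    },
}

print(record["descriptive"]["title"])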
Two sub-types of administrative metadata are rights management metadata and preservation metadata. Rights management metadata explains intellectual property rights, while preservation metadata contains information to preserve and save a resource. Statistical data repositories have their own requirements for metadata in order to describe not only the source and quality of the data but also what statistical processes were used to create the data, which is of particular importance to the statistical community in order to both validate and improve the process of statistical data production.

------------------------------

Date: Mon, 6 Jul 2020 09:07:29 -0400
From: "Golf Swing"
Subject: Watch this FREE presentation right away.

Watch this FREE presentation right away.

http://monstergolf.us/VGuA3n5R7vL43lVCITT_qrT8JYmM7ME7HL6PSYFXqpEoNpQ
http://monstergolf.us/ZYsNv-LYQ40EPGMf1-Go4w-H99de0t5XklLNrM4y8kMCPm3j

In water, staying afloat is possible using buoyancy. If an animal's body is less dense than water, it can stay afloat. This requires little energy to maintain a vertical position, but requires more energy for locomotion in the horizontal plane compared to less buoyant animals. The drag encountered in water is much greater than in air. Morphology is therefore important for efficient locomotion, which is in most cases essential for basic functions such as catching prey. A fusiform, torpedo-like body form is seen in many aquatic animals, though the mechanisms they use for locomotion are diverse.

The primary means by which fish generate thrust is by oscillating the body from side-to-side, the resulting wave motion ending at a large tail fin. Finer control, such as for slow movements, is often achieved with thrust from pectoral fins (or front limbs in marine mammals). Some fish, e.g. the spotted ratfish (Hydrolagus colliei) and batiform fish (electric rays, sawfishes, guitarfishes, skates and stingrays), use their pectoral fins as the primary means of locomotion, sometimes termed labriform swimming. Marine mammals oscillate their body in an up-and-down (dorso-ventral) direction. Other animals, e.g. penguins and diving ducks, move underwater in a manner which has been termed "aquatic flying". Some fish propel themselves without a wave motion of the body, as in the slow-moving seahorses and Gymnotus. Other animals, such as cephalopods, use jet propulsion to travel fast, taking in water then squirting it back out in an explosive burst. Other swimming animals may rely predominantly on their limbs, much as humans do when swimming. Though life on land originated from the seas, terrestrial animals have returned to an aquatic lifestyle on several occasions, such as the fully aquatic cetaceans, now very distinct from their terrestrial ancestors. Dolphins sometimes ride on the bow waves created by boats or surf on naturally breaking waves.

Benthic

[Image caption: Scallop in jumping motion; these bivalves can also swim.]

Benthic locomotion is movement by animals that live on, in, or near the bottom of aquatic environments. In the sea, many animals walk over the seabed. Echinoderms primarily use their tube feet to move about. The tube feet typically have a tip shaped like a suction pad that can create a vacuum through contraction of muscles. This, along with some stickiness from the secretion of mucus, provides adhesion. Waves of tube feet contractions and relaxations move along the adherent surface and the animal moves slowly along. Some sea urchins also use their spines for benthic locomotion.

------------------------------

End of alt.music.moxy-fruvous digest V14 #4525
**********************************************